STUDYING SOME FACTORS AFFECTING THE ACCURACY OF DIGITAL PHOTOGRAMMETRIC APPLICATIONS By

Eng. Ahmed Abdelhafiz Ahmed Abdelhafiz B.Sc. Civil Engineering, Assiut University, 1996

A THESIS

Submitted in Partial Fulfilment of the Requirements for the Degree of

Master of Science

Department of Civil Engineering, Assiut University, Assiut, Egypt, 2000

Supervised by: Prof. Dr. A. M. Abd-El Wahed, Assis. Prof. Dr. A. Abd-El Rahem, Assis. Prof. Dr. Fraag A. Fraag

Examined by: Prof. Dr. M. El Nokrashy Prof. Dr. M. Dief-Allah Prof. Dr. A. M. Abd-El Wahed Assis. Prof. Dr. Fraag A. Fraag

‫ﺑﺴﻢ ﺍﷲ ﺍﻟﺮﲪﻦ ﺍﻟﺮﺣﻴﻢ‬

To my Parents


ACKNOWLEDGMENT

Thanks to God, the Master of the World, for guiding and helping me to achieve this thesis. I would like to express my deep gratitude to Prof. Dr. Abdel-Aal Mohamed Abdel-Wahed, Dr. Ahmed Abdel-Rahiem and Dr. Farag Ali Farag for their supervision, guidance and encouragement throughout the research. I would also like to express my thanks to my examiners: Prof. Dr. M. El-Nokrashi and Prof. Dr. M. Dief Allah. I wish to express my deepest thanks to my brother, Eng. Ashraf Abd-El Hafeez, for his constant encouragement. I would also like to thank Eng. Ahmed Yousri for his support during the research. Last but not least, this work would not have been possible without the enduring support, patience, and understanding of my family.

CONTENTS

CHAPTER 1: Introduction ……………………………………… 1
  1.1 General ……………………………………………………. 1
  1.2 Objective and Scope of the Research Work ………………. 1

CHAPTER 2: Literature Review ………………………………… 3
  2.1 Introduction ………………………………………………... 3
  2.2 Types of Cameras ………………………………………….. 4
    2.2.1 Metric Cameras ………………………………………... 4
    2.2.2 Non-Metric Cameras …………………………………... 5
  2.3 Camera Calibration ………………………………………... 8
  2.4 Image Coordinate Errors and their Refinement …………… 10
    2.4.1 Image Deformation ……………………………………. 11
    2.4.2 Lens Distortions ……………………………………….. 13
  2.5 Data Reduction …………………………………………….. 16
    2.5.1 Analog Approaches ……………………………………. 16
    2.5.2 Analytical Approaches ………………………………… 17
    2.5.3 Digital Approaches ……………………………………. 19
  2.6 Mathematical Models ……………………………………… 19
  2.7 Digital Photogrammetry …………………………………… 27
    2.7.1 Digital Image …………………………………………... 28
    2.7.2 Digital Image Resolution ……………………………… 30
    2.7.3 Digital Image Acquisition ……………………………... 30
    2.7.4 Digital Image Data Processing ………………………… 33
    2.7.5 Advantages of Digital Photogrammetry ……………….. 34
    2.7.6 Fully Digital Photogrammetric Systems ………………. 35
  2.8 Factors Affecting the Accuracy of the Photogrammetric Systems … 41
    2.8.1 The Factors Affecting the Traditional Photogrammetric Systems … 42
    2.8.2 The Factors Affecting the Digital Photogrammetric Systems … 46
  2.9 Review of some Engineering Applications ………………... 49

CHAPTER 3: Methodology and Developed Tools for Experimental Work … 53
  3.1 Introduction ………………………………………………... 53
  3.2 Instruments ………………………………………………… 56
  3.3 Computer Programs (Software) …………………………… 62
    3.3.1 Photo Measurement Program ………………………….. 63
    3.3.2 Base Line Program …………………………………….. 68
      3.3.2.1 Accuracy of the computed coordinates …………….. 69
    3.3.3 BSC Program …………………………………………... 76
    3.3.4 RMS Program ………………………………………….. 79

CHAPTER 4: Results and Their Analysis ………………………. 80
  4.1 Introduction ………………………………………………... 80
  4.2 Number of Control Points …………………………………. 82
  4.3 (B/H) Ratio ………………………………………………… 86
  4.4 Resolution ………………………………………………….. 89
    4.4.1 Digitizing Resolution ………………………………….. 91
    4.4.2 Ground Resolution …………………………………….. 103

CHAPTER 5: Digital Photogrammetric Applications …………... 109
  5.1 Introduction ………………………………………………... 109
  5.2 Architectural Application ………………………………….. 110
    5.2.1 First Case: The Facade of the Gate of Assiut University Campus … 110
    5.2.2 Second Case: The Facade of the Old Building of Assiut University … 115
    5.2.3 Third Case: The Facade of the Faculty of Medicine, Assiut University … 120

CHAPTER 6: Conclusions and Recommendations ……………... 125
  6.1 Conclusions ………………………………………………... 125
  6.2 Recommendations …………………………………………. 126

REFERENCES …………………………………………………... 127

Appendix 'A': Sample of the Used Computer Files ……………... 134
Appendix 'B': The Code of the Designed Computer Programs …. 142

Abstract

ABSTRACT

'Digital' has become the universal catchword of the last decade in many disciplines. Photogrammetry has now transitioned from computer-assisted plotters and analytical plotters to digital photogrammetric workstations. Studying some of the factors influencing the accuracy of digital photogrammetric applications is the main objective of this thesis. The studied factors are the number of control points, the base to camera-object distance (B/H) ratio, and the resolution. To achieve this objective, a test model was photographed using a non-metric camera. A photographing plan was prepared to allow obtaining stereo-models with different parameters. A commercial colour flat-bed scanner was used for digitizing the photos. A computer program has been developed, using the Visual Basic language, to display the photos on the computer screen and to perform the required measurements in an interactive way. A bundle adjustment with self-calibration computer program 'BSC', based on the collinearity equations, was used to obtain the ground coordinates of the measured photo points. It has been found that, for one stereo pair, eleven control points and a B/H ratio of about 0.6 to 0.7 give optimum accuracy. It has also been found that the accuracy increases, at different rates, with increasing digitizing resolution. A relationship between the accuracy and the ground pixel size was obtained. This work was extended to apply the obtained optimum photographing conditions to photogrammetric applications in the field of architecture inside the campus of Assiut University. The obtained accuracy was about 18 to 20 μm at photographic scale in the plane of the façade and about 26 to 28 μm in the perpendicular plane, which corresponds to 0.9 to 1.0 of the pixel size in the plane of the façade and about 1.3 to 1.4 of the pixel size in the perpendicular plane.

Chapter (1): Introduction

1

Chapter (1)

INTRODUCTION

1.1 General:

Digital photogrammetry is currently acknowledged to be the way of the future, and we live in a time of transition to digital photogrammetry. Digital Photogrammetric Workstations (DPW) not only solve the standard photogrammetric tasks and put data in digital form, but also introduce a new quality standard for photogrammetric operation. DPW solutions increase productivity in photogrammetric production, but this technology was considered expensive until personal computers, which are a cost-effective platform, came into use [Pivenicka et al, 1996]. In the civilian market segment, digital photogrammetric stations have been around for about one decade, first as university-based solutions and later as commercial products. In many countries, the digital systems have already outnumbered the analytical plotters. Now the mapping profession has transitioned from computer-assisted plotters to digital photogrammetric workstations [Light, 1999].

1.2 Objective and Scope of the Research Work:

The main objective of this work is to study some of the factors affecting the accuracy of ground coordinates computed from digital photogrammetry, and to assess the suitability of these coordinates for architectural applications. The following factors are studied in detail:

1- Number of control points.
2- Base to camera-object distance (B/H) ratio.
3- Resolution (digitizing resolution and ground resolution).

In order to achieve the objective of this research, a non-metric camera was used to obtain terrestrial photographs to construct stereo-models with different parameters. The resulting photos were converted to digital form using a commercial flat-bed colour scanner. The photo measurements were carried out on the screen of a personal computer using a computer program, 'Photo Measurement', developed specially for this purpose using the Visual Basic language. The ground measurements of control and check points were carried out using a 'TOPCON ITS-1' total station. Another Visual Basic computer program was prepared to process the ground measurements using the base line method. A bundle adjustment with self-calibration 'BSC' program, developed by Novak (1991) and based on the collinearity equations, was used to compute the space coordinates. The root mean squares of the check points were calculated and analyzed. Some architectural applications inside the campus of Assiut University were studied using the developed system. A brief outline of the contents of this thesis can be presented as follows: a literature review of the previous work is given in the second chapter. The third chapter describes the hardware components used in this investigation and also includes a brief explanation of the developed computer programs. The fourth chapter presents the analysis of the results. The engineering applications carried out in this research are presented in the fifth chapter. The sixth chapter contains conclusions and recommendations. Appendix 'A' includes examples of the input and output computer files. The code of the developed programs is presented in Appendix 'B'. Finally, a floppy disk containing a softcopy of the 'Photo Measurement' program is attached.
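The accuracy assessment described above rests on the root mean square (RMS) errors of the check points, i.e. on comparing the photogrammetrically computed coordinates with the total-station values. As an illustration only (the thesis programs were written in Visual Basic; the coordinate values below are hypothetical), the RMS computation can be sketched in Python as:

```python
import math

def rms_errors(computed, surveyed):
    """Root-mean-square error per axis between photogrammetrically
    computed coordinates and surveyed check-point coordinates.
    Each argument is a list of (X, Y, Z) tuples in the same point order."""
    n = len(computed)
    sq = [0.0, 0.0, 0.0]
    for c, s in zip(computed, surveyed):
        for i in range(3):
            sq[i] += (c[i] - s[i]) ** 2
    return tuple(math.sqrt(v / n) for v in sq)

# Hypothetical check-point coordinates (metres):
computed = [(10.012, 5.003, 2.001), (20.005, 7.498, 1.997)]
surveyed = [(10.000, 5.000, 2.000), (20.000, 7.500, 2.000)]
rms_x, rms_y, rms_z = rms_errors(computed, surveyed)
```

The three RMS values can then be analyzed per axis, e.g. separating the accuracy in the plane of the object from that in the depth direction.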

Chapter (2): Literature Review

3

Chapter (2)

LITERATURE REVIEW

2.1 Introduction:

The primary characteristic of photogrammetry as a metrology technique is that the measurements are carried out indirectly. Classically, the object or phenomenon to be evaluated was photographed from two or more locations, and the measurements were made in the overlapping photographs using a wide range of methods, from simple scales to sophisticated computer-assisted equipment. Photogrammetry is ideally suited for precision non-contact measurement in industrial applications, particularly for objects which are difficult to measure precisely by a direct or contact method [Lee, 1999]. Early in the application of non-topographic photogrammetry, overlapping frame photographs were usually taken by well-controlled metric cameras. These photographs were then used in analog optical-mechanical plotters to extract information about the photographed object. Such information was for the most part graphical, for example, plots of contours, profiles, cross-sections, and various engineering drawings. Progress was subsequently made both in acquisition and in reduction. The photos can be acquired by small-format metric or stereometric cameras, large-format metric cameras and a variety of non-metric cameras. The use of these various cameras required the development of suitable mathematical models for their reduction and a significant increase in the sophistication of the design and implementation of software for multi-photo reduction with self-calibration and quality assurance [ASPRS, 1989].


2.2 Types of Cameras

There are two basic types of cameras: metric and non-metric. A metric camera is characterized by a stable, known, and repeatable interior orientation, defined by fiducial marks; many are equipped with measuring or setting devices to be used in connection with the exterior orientation as well. A non-metric camera, on the other hand, is a camera whose interior orientation is completely or partially unknown and frequently unstable. All "amateur" or "off the shelf" cameras belong to this category and are perhaps most easily classified by the lack of fiducial marks. A semi-metric camera falls somewhere in between [Faig, 1976].

2.2.1 Metric Cameras

The term metric camera as used herein includes those cameras manufactured especially for photogrammetric applications. Metric cameras have fiducial marks built into their focal plane, which enable accurate recovery of the principal point. They are stably constructed and completely calibrated before use. Their calibration values for focal length (C.F.L.), principal point coordinates (x0, y0) and lens distortions can be applied with confidence over a long period of time. In the past, most of the available metric cameras were designed to be used with glass plates as the photographic emulsion base. This is the best solution possible from the point of view of stability [Mahani, 1981]. In the case of using traditional films, certain precautions have to be taken to


flatten the film before exposure and to control the stability of the film during the photographic processing. The two main types of metric cameras are single cameras and stereometric cameras [ASPRS, Non-Topographic Photogrammetry, Second Edition, 1989].

2.2.2 Non-Metric Cameras

Non-metric cameras are cameras that have not been designed especially for photogrammetric purposes. Early in the 1970s, the use of non-metric cameras expanded considerably and made an impact on a large number of areas where measurements are required, as stated in the 1976 report of ISP working group V/2 [Faig, 1976]. Fraser, 1982, added that although various types of photogrammetric cameras are available, there is considerable use for 'off the shelf' non-metric cameras. Non-metric cameras have a number of physical and geometrical shortcomings which have to be compensated for by the evaluation procedure. So, one may identify the problems as follows:

♦ Definition of interior orientation: The basic parameters of interior orientation are provided by the spatial x/y/z coordinates of the projection center within a photo-coordinate system whose x/y plane coincides with the image plane. If at all, this image plane is poorly defined for non-metric cameras, and without fiducial marks, the x and y axes within this plane are not fixed.

♦ Stability: Once the interior orientation is defined, its parameters are assumed to remain constant or to change only by a known amount (e.g., when changing the focus). This requires a degree of internal stability and rigidity that is not usually available with non-metric cameras. Therefore, the interior orientation parameters change for each exposure taken with a non-metric camera.

♦ Distortions: There are a number of physical influences that cause deviations from the theoretical central perspective. The most important ones are radial (symmetric) and decentering lens distortions. The latter is often expressed by its components as asymmetric radial and tangential distortions. While these distortions are unique for each individual lens, and thus occur in both metric and non-metric cameras, their amounts are often kept negligible by design for metric cameras. In addition to commonly large distortions, the situation for non-metric cameras is further complicated by focusing changes, which result in changes in these distortions.

♦ Film deformation: Two factors contribute to this error, namely how well the image plane was established at the time of exposure and how the film was processed and handled afterwards. Even with vacuum backs and solid frames, this error source provides the largest uncertainty for metric cameras. This problem is, of course, much larger for non-metric cameras because they usually do not have such devices. Other error sources, such as atmospheric refraction, affect metric and non-metric photography in the same manner.

♦ Repeatability: In order to utilize predetermined parameters in subsequent evaluations, it is essential that exactly the same exposure settings can be recreated for a number of exposures. For non-metric cameras, this is impossible, not only because of the already mentioned lack of stability, but also because there are no accurate measuring or setting devices that enable the operator to return to a previous setting.

The use of non-metric cameras, as opposed to metric cameras, for photogrammetric purposes has the following advantages and disadvantages [ASPRS, 1989].



Advantages:
1. General availability,
2. Flexibility of focusing range,
3. Some are motor driven, allowing for a quick succession of photographs,
4. Can be hand-held and thereby oriented in any direction, and
5. The price is considerably less than for metric cameras.



Disadvantages:
1. The lenses are designed for high resolution at the expense of high distortion,
2. Instability of interior orientation,
3. Lack of fiducial marks, and
4. The absence of level bubbles and orientation provisions precludes the determination of exterior orientation before exposure.

Concentrated research and development efforts, spearheaded by the

International Society for Photogrammetry and Remote Sensing (ISPRS), have


resulted in a number of data-reduction approaches particularly suitable for non-metric photography. These approaches are based on highly sophisticated analytical techniques which combine, in most cases, the calibration and evaluation phases. Through the use of such techniques, it is possible to reduce, or possibly completely eliminate, the effects of the disadvantages of non-metric cameras listed above [ASPRS, 1989]. The non-metric/computer evaluation combination has reached its fullest potential, and accuracies reaching the photogrammetric noise level have been achieved. It often depends on the individual project whether the low-cost camera with expensive evaluation system or the metric approach is more suitable or financially advantageous, which leaves the decision to the user. Often project arrangements require versatility and light weight, which can only be met by non-metric cameras, and with the progress that has been made in the evaluation phase this option can now be a high-precision approach. The photogrammetric potentials of non-metric cameras are indeed very high [Faig, 1982].

2.3 Non-Metric Camera Calibration

The whole concept of non-metric camera calibration has to be expanded and applied to individual photographs. This has led to procedures that are commonly referred to as pre-calibration, on-the-job calibration, and self-calibration. The following is a brief summary of these methods [ASPRS, 1989].

♦ Pre-calibration: represents the conventional calibration approach in a laboratory or at a test range. This is still important for non-metric cameras; however, it provides only a partial solution. Utilizing this approach, parameters of such effects as lens distortion that remain constant for a given lens, or that change according to a known pattern when changing the focus, can be determined by pre-calibration. They are then treated as known quantities in further evaluations.

♦ On-the-job calibration: requires a sufficient number of known control points around the object to perform a complete calibration using the actual object photography. In this case, calibration and evaluation are either combined into one process or carried out sequentially in much the same way as with metric photography.

♦ Self-calibration: utilizes the geometric strength of overlapping photographs to determine the parameters of interior orientation plus distortions together with the object evaluation, just as this geometric strength is used to determine the parameters of relative and absolute orientation. There is no need for additional object-space control when interior orientation parameters are included in the mathematical modeling.

This last concept appears to be most suitable for the evaluation of non-metric photography. If a metric camera is used under rather stable external conditions, it is not necessary to vary the additional parameters for each photograph. This type of adjustment is commonly referred to as block- or strip-invariant.


2.4 Image Coordinate Errors and their Refinement

Systematic errors are the deviations of the physical imaging event from the mathematical model chosen to model the process, and they can typically be described by their own mathematical models. For example, the collinearity equations are based on the assumption that the light ray from the object point through the lens to the image point is a straight line. While this is true theoretically, any real lens has distortions that render this assumption invalid. To allow the use of a more manageable theoretical model, the effects of the lens distortions can be corrected in a preliminary step using a priori information about the distortions. The same procedure can be followed for other sources of systematic errors, using either calibration information about the specific equipment used or general mathematical models for such distortions as, for example, atmospheric refraction [ASPRS, 1989].

The corrections for the systematic errors are applied in an order opposite to that in which they occur. The light ray travels from the object point through the atmosphere (or water), where it is refracted owing to changes in the density of the medium. When the ray reaches the lens it undergoes lens distortion, then reaches the image plane. If the image is formed on film, then the film may be distorted during the imaging or development process. Glass plates may also undergo this deformation, to a lesser extent. If an electronic camera is used, nonlinearities in scanning and other electronic effects may result in a geometrical deformation of the image. If a hard-copy image is used, it must be measured on a comparator, which has its own errors associated with it. A philosophical and practical point is the determination of exactly which aspects of the imaging process to include in the general model and which aspects


to treat as systematic errors and take care of in a preliminary step [Mikhail, 1985].

2.4.1 Image Deformation

The term 'image deformation' refers to the movement of image points from their ideal image positions. This movement may be due to physical movements of the imaging medium, such as film or glass plates, to improper support of the imaging medium by the camera platen, or to electronic inaccuracies for cases in which the image is obtained from tube or solid-state video sensors. While the actual model for each case is different, the effects are very similar [El-Hakim, 1986]. Deformations occurring in photos taken with non-metric cameras can be controlled by self-calibration of the camera. Corrections for film deformation in metric cameras are based on the availability of reference marks attached to the camera body, which are imaged on the film and whose coordinates are known in the camera coordinate system, unaffected by any film deformation that may occur. These reference marks may be either fiducial marks or reseau marks, depending on the type of camera, the type of application, and the accuracy required from the solution. Fiducial marks are located either at the edges of the frame, at the mid-points of the sides, or in the corners, or preferably both. Reseau marks are distributed across the image in a regular grid typically containing from 25 to 121 points. Since more reseau marks are available and they will be closer to any particular image point, the use of reseau marks provides a higher degree of accuracy. Balanced against this is the fact that


cameras equipped with reseau marks are typically more expensive than cameras with fiducials only, and the data reduction is more expensive since more points must be measured. Fig.2.1 illustrates a photograph taken with the metric camera Rolleiflex 6006, showing 121 cross reseau marks on a 60 × 60 mm format [ASPRS, 1989].

Fig.2.1: A photograph taken with metric camera Rolleiflex 6006, showing 121 cross reseau on a 60 × 60 mm format [ASPRS, 1989].

The use of electronic imaging devices has introduced the necessity of studying a different type of image deformation. If video cameras with an image tube are used, then variations in beam scanning velocity and geometry can lead to deformations in the image. This geometric distortion is fairly consistent, if the camera is operated under steady-state conditions, and can be calibrated by imaging a test field or by inserting reseau marks into the video tube. Solid-state video cameras offer much greater metric accuracy and stability, although there


still may be some distortions due to variations in synchronization and clock rate. Saturation of the photosensitive cells on the sensor may also result in geometric distortion of the image [ASPRS, 1989]. Fraser, 1997, used experimental results to highlight important characteristics of digital cameras with self-calibration. The quantization of the image intensity in a digital image may also produce image deformation. If an insufficient number of gray levels is used for the digitization, the apparent centroid of target points may appear to shift to the observer, having an effect equivalent to a geometric deformation of the image [ASPRS, 1989].
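The centroid-shift effect of coarse gray-level quantization can be illustrated numerically. The following sketch uses a synthetic one-dimensional target profile (all intensity values are invented for illustration) and compares the intensity-weighted centroid before and after quantizing to four gray levels:

```python
def centroid(profile):
    # Intensity-weighted centroid of a 1-D image profile (pixel units).
    total = sum(profile)
    return sum(i * v for i, v in enumerate(profile)) / total

# A small synthetic target whose true centre lies between pixels 2 and 3:
fine = [0.0, 0.1, 0.8, 1.0, 0.3, 0.0]        # "continuous" intensities
coarse = [round(v * 3) / 3 for v in fine]    # quantized to 4 gray levels

# Apparent movement of the centroid caused purely by the quantization:
shift = abs(centroid(fine) - centroid(coarse))
```

With only four gray levels, the centroid of this target moves by roughly 0.15 pixel, i.e. the quantization alone behaves like a geometric deformation of the image.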

2.4.2 Lens Distortions

Lens distortion is usually divided into two types, radial and decentering. As its name implies, radial distortion affects the position of image points along a straight line radially out from the principal point of the camera. It is also known as symmetric distortion, since it is a function only of radial distance and is the same at any angle around the principal point. Decentering distortion, caused by the improper manufacture of the lens, has both a tangential and a radial asymmetric component. Tangential distortion occurs in a parallel orientation at each point across the image and varies as a function of the radial distance and the orientation of the line from the point to the principal point with respect to a reference direction. The radial asymmetric component is added to the radial component [El-Hakim, 1986].

a. Radial Lens Distortion


The radial distortion, δr, of a lens focused at infinity can be written as a function of the radial distance, r:

δr = K1r³ + K2r⁵ + K3r⁷ + … …………… (2.1)

in which K1, K2, K3, … are the polynomial coefficients. If the distortions δrS1 and δrS2 at two different object distances, S1 and S2, are known, then the distortion at any other object focus distance S can be calculated using the following formulae, where f is the focal length of the lens [Brown, 1971]:

δrS1 = K1S1 r³ + K2S1 r⁵ + K3S1 r⁷
δrS2 = K1S2 r³ + K2S2 r⁵ + K3S2 r⁷

αS = [(S2 − S)(S1 − f)] / [(S2 − S1)(S − f)]

δrS = K1S r³ + K2S r⁵ + K3S r⁷

where:
K1S = αS K1S1 + (1 − αS) K1S2
K2S = αS K2S1 + (1 − αS) K2S2
K3S = αS K3S1 + (1 − αS) K3S2
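Equation (2.1) and Brown's interpolation formulae translate directly into code. In the sketch below only the formulae come from the text; the focal length, object distances and K coefficients are hypothetical values chosen purely for illustration:

```python
def radial_distortion(r, k1, k2, k3):
    # Eq. (2.1): delta_r = K1*r^3 + K2*r^5 + K3*r^7
    return k1 * r**3 + k2 * r**5 + k3 * r**7

def interpolated_coeffs(s, s1, s2, f, k_at_s1, k_at_s2):
    """Brown (1971): interpolate the K coefficients calibrated at object
    distances s1 and s2 to any other focus distance s."""
    alpha = ((s2 - s) * (s1 - f)) / ((s2 - s1) * (s - f))
    return [alpha * a + (1 - alpha) * b for a, b in zip(k_at_s1, k_at_s2)]

# Hypothetical calibration (f and distances in metres, r in mm):
f = 0.05
k_near = [5e-5, -2e-8, 0.0]     # coefficients calibrated at s1 = 2 m
k_far = [3e-5, -1e-8, 0.0]      # coefficients calibrated at s2 = 10 m

k_mid = interpolated_coeffs(4.0, 2.0, 10.0, f, k_near, k_far)
dr = radial_distortion(10.0, *k_mid)   # distortion at r = 10 mm
```

Note that setting s = s1 makes αS = 1, so the interpolation correctly returns the coefficients calibrated at s1.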

b. Decentering Distortion

Decentering distortion is a more complicated problem than radial distortion, since decentering involves both a tangential and a radial asymmetric component. The most commonly accepted model for decentering distortion was developed by Brown (1966). The radial and tangential components, δr and δt, are:

δr = 3P sin(φ − φ0) …………… (2.2)
δt = P cos(φ − φ0) …………… (2.3)

Where P is the value of the tangential distortion profile at the radial distance of the point, φ0 is the angle between the axis of maximum tangential distortion and the x image axis, and φ is the angle between the line from the principal point to the point of interest and the x image axis (see Fig.2.2). These expressions can be modified to give the x and y components of the decentering distortion as follows:

δx = [P1(r² + 2x²) + 2P2xy] [1 + P3r² + P4r⁴ + …] …………… (2.4)
δy = [2P1xy + P2(r² + 2y²)] [1 + P3r² + P4r⁴ + …] …………… (2.5)

where P1, P2, … are the polynomial coefficients and r is the radial distance from the principal point to the image point.

Fig.2.2: Parameters that determine the amount of decentering lens distortion (the x and y image axes, the radial distance r, and the angles φ and φ0)
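Equations (2.4) and (2.5) can be sketched in code as follows. The coefficient values and image coordinates below are purely illustrative, not calibration results from the text:

```python
def decentering(x, y, p1, p2, p3=0.0, p4=0.0):
    """x and y components of decentering distortion, Eqs. (2.4)-(2.5):
    dx = [P1(r^2 + 2x^2) + 2 P2 x y] [1 + P3 r^2 + P4 r^4 + ...]
    dy = [2 P1 x y + P2(r^2 + 2y^2)] [1 + P3 r^2 + P4 r^4 + ...]"""
    r2 = x * x + y * y
    scale = 1.0 + p3 * r2 + p4 * r2 * r2
    dx = (p1 * (r2 + 2 * x * x) + 2 * p2 * x * y) * scale
    dy = (2 * p1 * x * y + p2 * (r2 + 2 * y * y)) * scale
    return dx, dy

# Hypothetical coefficients, image point at (5, 3) mm:
dx, dy = decentering(5.0, 3.0, p1=1e-6, p2=-5e-7)
```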


2.5 Data Reduction

It is a general engineering fact that a lack in hardware capability has to be balanced by more sophisticated software and increased computational effort if similar results are aspired to. Thus savings on one aspect of a project lead to additional costs on another. Measuring and reducing the data obtained from the photographs can be carried out using different approaches. These approaches are summarized in the following sections [ASPRS, 1989].

2.5.1 Analog Approaches

The analog technique employs complex instrumentation to solve the photogrammetric problems. Typical of this method is the use of stereoscopic plotting instruments for producing a topographic map. The stereoscopic plotter contains complicated mechanical and optical parts. These parts represent the interface between the operator and the film photographs. Fig.2.3 shows the main parts of a stereo-plotting machine based on mechanical projection. Each of the two corresponding optical rays, which intersect to locate a point in the model, is replaced by a mechanical space rod. Hence a mechanical model is formed and measured [Farag, 1999]. An analog stereo-plotter is basically an analog computer of the read-only type, programmed for the mathematical central perspective. While a number of them are further restricted to accept near-vertical aerial photography only, others can be used for terrestrial photography as well. Thus, analog evaluation of non-metric photography basically has to neglect all physical/geometric shortcomings of the camera. If, however, the required


accuracy of the graphical end product (plan with contours or form lines) is acceptable, then analog evaluation is most desirable. If the accuracy level proves to be acceptable, other factors can also be neglected, because radial lens distortion represents the most prominent of the deviations from central perspective.

Fig.2.3: The main parts of a stereo-plotting machine [after Farag, 1999]

2.5.2 Analytical Approaches

The analytical approach rests, briefly, on two main ideas: the first is that the projective relationships are established mathematically, and the second is that a computer performs the computations. Fig.2.4 shows the main components of an analytical plotter. The following are the main steps of the analytical method [ASPRS, 1989]:
• Image coordinates of targets are measured and sent immediately to the computer.
• The computer reduces the data either on-line or off-line.
• Measurements are monoscopic for discrete points or stereoscopic for non-discrete points.

Fig.2.4: Analytical stereoplotter [ASPRS, 1989]

An ever-increasing use has been made of this approach, in which measurements can be made either stereoscopically, using analytical plotters or stereocomparators, or monocularly, using a mono-comparator. Actually, there is no difference in accuracy between the stereoscopic and the monocular approaches in analytical photogrammetry. The difference between the two alternatives is primarily in the operational requirements. While the stereoscopic approach can be used for marked or unmarked stereo-photography, the monocular approach requires that all point images to be measured are marked permanently on the diapositives [ASPRS, 1989].


2.5.3 Digital Approaches Digital photogrammetry is essentially a sequential process in which either the hard-copy photographs are first digitized or the images are directly acquired with digital cameras; then the digital data are processed in computers without human assistance. As such, digital photogrammetry involves the practice of using pixels and image processing techniques to arrive at geometric information. More details about the digital approach are given in section 2.7.

2.6 Mathematical Models

Problems in photogrammetry can be solved mainly by pure mathematical modeling based on precise measurements of the x and y coordinates of image points [Farag, 1999]. As in any field, a number of mathematical models can be written to cover the same physical situation. The collinearity condition, the coplanarity condition, and the direct linear transformation are the mathematical methods of the photogrammetric technique that will be described briefly in the following sections [Mikhail, 1985].

The Collinearity Condition

The most straightforward way to model any physical system is to determine and describe the components involved, and then mathematically express the relationships between them. This is the approach of the collinearity method: a mathematical expression of the physical fact that the object point, the perspective center, and the image point ideally lie on a straight line (see Fig.2.5).


Multiplying the object-space vector by the rotation matrix, M, brings it into the image-space coordinate system. Including the unknown scale factor, k, then gives:

a = k M A …………..(2.6)

Where,
a is the image vector, expressed in the image coordinate system, and
A is the vector from the perspective center to the object point, expressed in the object-space coordinate system.

Fig.2.5: The collinearity condition

This yields three equations that describe the physical situation. Since the scale factor k is of no interest in itself, it can be eliminated by dividing the first and second equations by the third and rearranging to yield a functional form. This gives the most commonly used form of the collinearity equations, referred to later as (2.12 & 2.13).


The Coplanarity Condition

The coplanarity equation can be thought of as an extension of the collinearity equation: where the collinearity equation states that the object point, the perspective center, and the image point all lie on a straight line, the coplanarity equation states that when two photos image the same point, the two image rays, from the object point to each image, determine a plane (see Fig.2.6).

Fig.2.6: The coplanarity condition

Equation (2.7) represents this condition. B is the base vector, and [x1′, y1′, z1′], [x2′, y2′, z2′] are the two image vectors. For the three vectors to lie in a plane, the volume of the parallelepiped formed by the three vectors must be zero. This is written in the form of a determinant:


| Bx   By   Bz  |
| x1′  y1′  z1′ | = 0 ……………(2.7)
| x2′  y2′  z2′ |

Equation (2.7) contains 12 unknowns, six for each of the two photos: the three coordinates of the perspective center and the three orientation angles, assuming that the interior orientation is known.
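The determinant of equation (2.7) is just a scalar triple product and can be evaluated directly. The sketch below is a minimal illustration with invented vectors; it shows that two rays which really do intersect satisfy the condition, while skew rays do not.

```python
def coplanarity_determinant(B, v1, v2):
    """Evaluate the determinant of eq. (2.7): the rows are the base
    vector B and the two image-space vectors v1, v2. A value of zero
    means the three vectors lie in one plane (the epipolar plane),
    i.e. the two image rays intersect in space."""
    (bx, by, bz), (x1, y1, z1), (x2, y2, z2) = B, v1, v2
    return (bx * (y1 * z2 - z1 * y2)
            - by * (x1 * z2 - z1 * x2)
            + bz * (x1 * y2 - y1 * x2))

# Two rays from camera stations separated by base B = (1, 0, 0),
# both pointing at the same object point (5, 2, -10):
B = (1.0, 0.0, 0.0)
ray1 = (5.0, 2.0, -10.0)   # from station 1
ray2 = (4.0, 2.0, -10.0)   # from station 2 (= ray1 - B)
print(coplanarity_determinant(B, ray1, ray2))  # → 0.0
```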

The Direct Linear Transformation

An alternative formulation of the analytical orientation problem is the Direct Linear Transformation, or DLT, as originally proposed by Abdel-Aziz and Karara (1971). The main advantages of the method are that it requires neither a calibrated camera nor one with fiducial marks, and that it can be solved without supplying initial approximations for the parameters. By including the transformation from the comparator to the image space in the equations, the method essentially solves for the object-space coordinates directly in terms of the comparator coordinates. One derivation of the DLT begins with the collinearity equations. The affine transformation from the comparator to the image coordinates is incorporated in the collinearity equations along with the correction for lens distortion, leaving the collinearity equations in the form:

x − δx = xp − cx · [m11(X − XC) + m12(Y − YC) + m13(Z − ZC)] / [m31(X − XC) + m32(Y − YC) + m33(Z − ZC)] …………..(2.8)

y − δy = yp − cy · [m21(X − XC) + m22(Y − YC) + m23(Z − ZC)] / [m31(X − XC) + m32(Y − YC) + m33(Z − ZC)] ………….(2.9)

Where x and y are the image coordinates in the comparator coordinate system, δx and δy are the systematic errors of the image coordinates (in this case the lens distortion), xp and yp are the coordinates of the principal point in the comparator coordinate system, and cx and cy are the principal distances in the x and y directions. The differing principal distances in the x and y coordinate directions come about because of the affine scaling used in the transformation from the comparator coordinate system to the image coordinate system.
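Although the derivation above starts from the collinearity equations, the DLT is usually applied in its standard 11-parameter rational form, x = (L1·X + L2·Y + L3·Z + L4)/(L9·X + L10·Y + L11·Z + 1) and similarly for y; this form is what results when equations (2.8)-(2.9) are expanded and the coefficients collected. The sketch below is minimal, and the coefficients in the example are invented for illustration.

```python
def dlt_project(L, X, Y, Z):
    """Map an object point (X, Y, Z) to comparator coordinates with
    the standard 11-parameter DLT (L1..L11). Since the equations are
    linear in the L parameters, each measured point contributes two
    linear equations, so six or more control points determine them."""
    L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11 = L
    w = L9 * X + L10 * Y + L11 * Z + 1.0
    x = (L1 * X + L2 * Y + L3 * Z + L4) / w
    y = (L5 * X + L6 * Y + L7 * Z + L8) / w
    return x, y

# With the trivial coefficients below the mapping reduces to a
# parallel projection onto the XY plane, an easy sanity check:
L = (1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 0)
print(dlt_project(L, 2.0, 3.0, 5.0))  # → (2.0, 3.0)
```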

Photogrammetric Approach with the Bundle Method

In photogrammetry, the basic computational unit is the bundle of rays that originate at the exposure station and pass through the image points. Bundle block adjustment means the simultaneous least-squares adjustment of all bundles from all exposure stations to all measured image points, together with the simultaneous recovery of the orientation elements of all the photographs and the adjustment of the object points [ASPRS, 1989]. The fundamental equation describing the bundle in both image and object space, often called "single photo orientation", is:

⎡xi⎤   ⎡x0⎤        ⎧⎡Xi⎤   ⎡X0⎤⎫
⎢yi⎥ − ⎢y0⎥ = λi R ⎨⎢Yi⎥ − ⎢Y0⎥⎬ …… (2.10)
⎣0 ⎦   ⎣c ⎦        ⎩⎣Zi⎦   ⎣Z0⎦⎭

Where,
(xi, yi, 0)T is the vector of measured photo coordinates of point i,
(x0, y0, c)T are the photo coordinates of the exposure station,
λi is the scale factor of point i (i.e., the ratio of the lengths of the vectors along one ray in image and object space),
R is a 3×3 rotation matrix defining the space rotation of the ground system with respect to the photo system,
(Xi, Yi, Zi)T are the object-space coordinates of point i, and
(X0, Y0, Z0)T are the object-space coordinates of the exposure station.

Being faced with an individual scale factor for each point in each bundle is rather awkward. Therefore, λi is eliminated by dividing the first and second equations respectively by the third one, with:

    ⎡m11  m12  m13⎤
R = ⎢m21  m22  m23⎥ …………..(2.11)
    ⎣m31  m32  m33⎦

We thus obtain the collinearity equations (2.12 & 2.13), rewritten here as:

Fx = xi − x0 + c · [m11(Xi − X0) + m12(Yi − Y0) + m13(Zi − Z0)] / [m31(Xi − X0) + m32(Yi − Y0) + m33(Zi − Z0)] = 0 …………..(2.12)

Fy = yi − y0 + c · [m21(Xi − X0) + m22(Yi − Y0) + m23(Zi − Z0)] / [m31(Xi − X0) + m32(Yi − Y0) + m33(Zi − Z0)] = 0 …………..(2.13)
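Equations (2.11)-(2.13) can be evaluated numerically. The sketch below assumes one common ω-φ-κ rotation order (the text does not fix a particular convention) and checks a contrived vertical-photo case with invented numbers; for error-free observations both residuals vanish.

```python
import math

def rotation_matrix(omega, phi, kappa):
    """Build the 3x3 rotation matrix R of eq. (2.11) from the three
    rotation angles; the omega-phi-kappa sequence used here is one
    common photogrammetric convention."""
    co, so = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi), math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    return [
        [cp * ck,   co * sk + so * sp * ck,  so * sk - co * sp * ck],
        [-cp * sk,  co * ck - so * sp * sk,  so * ck + co * sp * sk],
        [sp,       -so * cp,                 co * cp],
    ]

def collinearity_residuals(xi, yi, x0, y0, c, m, Xi, Yi, Zi, X0, Y0, Z0):
    """Evaluate Fx and Fy of eqs. (2.12)-(2.13); both are zero when
    image point, exposure center and object point are collinear."""
    dX, dY, dZ = Xi - X0, Yi - Y0, Zi - Z0
    q = m[2][0] * dX + m[2][1] * dY + m[2][2] * dZ
    Fx = xi - x0 + c * (m[0][0] * dX + m[0][1] * dY + m[0][2] * dZ) / q
    Fy = yi - y0 + c * (m[1][0] * dX + m[1][1] * dY + m[1][2] * dZ) / q
    return Fx, Fy

# Vertical photo (all angles zero, R = identity), camera at height
# 1000 m, principal distance 0.15 m, object point at (100, 50, 0):
R = rotation_matrix(0.0, 0.0, 0.0)
Fx, Fy = collinearity_residuals(0.015, 0.0075, 0.0, 0.0, 0.15, R,
                                100.0, 50.0, 0.0, 0.0, 0.0, 1000.0)
# Fx and Fy vanish (up to floating-point rounding)
```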

These fundamental equations of analytical photogrammetry simply state that the image point, the exposure center, and the object point are located along a straight line. The equations contain six unknowns per bundle, namely the object-space coordinates (X0, Y0, Z0)T of the exposure station and the three rotation angles (ω, φ, κ), which are implicit in the rotation matrix R. In the basic bundle approach x0, y0, and c are considered to be known (e.g., from camera calibration); thus a minimum of three object-space control points is required (together with suitable geometry) to solve for the six unknowns per bundle, which are then used to compute the unknown object-space coordinates of the other measured image points.

Since the measured photo coordinates contain random errors, Fx and Fy may differ from zero. Thus the collinearity condition has to be extended in order to include such random errors in the adjustment. The collinearity equations are non-linear and need to be linearized by Taylor expansion at approximate values, which leads to an iterative solution. The observed quantities are image coordinates or photo coordinates. If there are deviations from the theoretical central perspective whose amounts are known (e.g., lens distortion as determined by camera calibration), then appropriate corrections can be applied to the photo coordinates prior to the bundle adjustment. This process is called image refinement. Unfortunately, these deviations are usually unknown for non-metric cameras and normally change from exposure to exposure. Thus image refinement is not effective, and the mathematical model has to be improved by introducing "additional parameters" into the collinearity equations, which can be expressed as follows:

x − x0 + Δxp = −c · [m11(Xi − X0) + m12(Yi − Y0) + m13(Zi − Z0)] / [m31(Xi − X0) + m32(Yi − Y0) + m33(Zi − Z0)] …………..(2.14)

y − y0 + Δyp = −c · [m21(Xi − X0) + m22(Yi − Y0) + m23(Zi − Z0)] / [m31(Xi − X0) + m32(Yi − Y0) + m33(Zi − Z0)] …………..(2.15)

Where Δxp and Δyp are functions of several unknown parameters and are adjusted simultaneously with the other unknowns in the equations. Under certain conditions, a complete recovery of all parameters, including x0, y0, and c, is possible without the requirement for additional ground control points. This approach is therefore often referred to as 'self-calibration' [ASPRS, 1989]. Two general principles have to be considered when using additional parameters, namely:
• The number of parameters should be as small as feasible, to avoid over-parameterization and to keep the additional computational effort small.
• The parameters are to be chosen such that their correlations with the other unknowns are small; otherwise the normal-equation matrix becomes ill-conditioned or singular.
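The text does not specify a particular functional form for the additional parameters Δxp and Δyp. One common concrete choice, shown here only as an illustrative sketch, is a two-term radial lens-distortion polynomial whose coefficients K1 and K2 would be recovered in the self-calibrating adjustment; the numeric values in the test case are invented.

```python
def radial_distortion_correction(x, y, x0, y0, K1, K2):
    """One common parameterization of the additional terms of eqs.
    (2.14)-(2.15): a two-term radial distortion polynomial about the
    principal point (x0, y0). Returns (dx_p, dy_p), the displacement
    components along the radial direction."""
    xb, yb = x - x0, y - y0           # image coords reduced to principal point
    r2 = xb * xb + yb * yb            # squared radial distance
    dr = K1 * r2 + K2 * r2 * r2       # relative radial displacement
    return xb * dr, yb * dr
```

With K1 = K2 = 0 the correction vanishes, so the model gracefully reduces to the unextended collinearity equations; keeping the parameter count this small follows the first of the two principles above.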


2.7 Digital Photogrammetry

Digital has become the universal catchword of the last decade in many disciplines, especially in photogrammetry. Digital photogrammetry, also called pixel photogrammetry or softcopy photogrammetry, is carried out on digital or softcopy workstations, which are well on their way to replacing the analytical instruments of days past. As photogrammetric technique moves from analytical to digital methodology, it allows automatic production of data and speeds up production [Hasi, 1999]. Digital workstations are becoming a regular fixture in modern photogrammetry. The potential of the digital takeover is perhaps best illustrated by the multitude of softcopy providers, by the diversity of the systems they offer, and by the promises they make [James, 1999]. Digital photogrammetry can supply GIS in a flexible way with several up-to-date major landbase layers, such as Digital Terrain Models (DTMs), digital orthophotos, and digital topographic databases. With the advent of GIS, photogrammetry has become only one of the components of GIS, but certainly one of the most important ones. Based on a uniform georeference, GIS serves as an integrator between topographical and thematic information [ISPRS, 1992]. The success of digital systems will only be fulfilled if it can be seen that the financial outlay is recompensed by an improvement in workflow, in both time and efficiency [Martin et al., 1996]. Capanni et al., 1996, added that digital systems have recently had a large diffusion on the market because of their better operative flexibility, because of the possibility of extending the field of application of photogrammetric restitution, and finally because they can better integrate the photogrammetric plotters with geographical


information systems. An overview of the digital image and its use in photogrammetry, image resolution, digital image acquisition, digital image-data processing, the advantages of digital photogrammetry, and fully digital photogrammetric systems is presented in the following sections.

2.7.1 Digital Image

An image in the world of computers is a collection of dots called 'pixels'. These pixels are arranged in rows to form the image. For the simplest data type, the pixels can only be black or white; for the most powerful data type, each pixel can be any one of 16.7 million colors. The most distinguishing characteristic of a digital image, as compared with a continuous-tone hard-copy image, is that it is formed by a set of discrete, very small, but finite picture elements, or pixels. Each pixel is essentially a square area (e.g., 25×25 micrometers (μm)) of uniform photo density. Therefore, image measurements are often expressed in terms of pixels and fractions of pixels [ASPRS, 1989]. As long as the pixels are small enough, one cannot see them as individual dots: one sees only groups of pixels, which together form areas of color or gray [Manual of U-Lead System's iPhoto Deluxe, 1992]. To obtain a digital representation of image data, a two-dimensional grid of points covering the area of interest is selected and a numerical value is assigned to the data at each point. The range of the numerical values and the spacing of the grid points are chosen so as to ensure that the data are represented with the


necessary fidelity. The differences between digital imagery and film imagery are due to the quantized nature of the digital image, both spatially and in intensity. An image on film can be thought of as continuous down to the level of the individual grains in the emulsion, while a digital image consists of a set of numbers representing the average intensity values of a regular grid of rectangles, or pixels [Helava et al., 1972]. A digital image is stored on the computer as numbers located at (x, y) positions, i.e., the colors are coded as numbers. Image data typically take the form of a photograph: a two-dimensional area in which each point has a particular density and color. Digital images are used in photogrammetry in two different ways. First, the increasing power of inexpensive processing systems allows the realization of packages for geometrical and radiometrical image transformation; this possibility allows the development of softcopy systems for producing numerical orthoimages, where the original image is not used directly for measuring but is processed to obtain a new, geometrically correct image. The second method uses digital images for 3-D measurements in much the same way as images on film are used in an analytical stereoplotter, i.e., as the primary source for generating pairs of plate coordinates which, suitably processed, yield the 3-D coordinates of the point. Progress is also being made towards the adoption of digital photogrammetric techniques in the map-revision process [Newby, 1996].


2.7.2 Digital Image Resolution

Resolution can be defined as the minimum distance between two adjacent features, or the minimum size of a feature, which can be detected by an imaging system. For photography, this distance is usually expressed in lines per millimeter recorded on a particular film under specified conditions. If expressed in terms of the size of objects or distances on the ground, it is termed ground resolution [ASP, 1980]. One can also say that resolution is the number of dots available to represent graphic detail in a given area: on a computer screen, the number of pixels per linear inch; on a printer, the number of dots printed in a linear inch; on a scanner, the number of pixels sampled per linear inch of the scanned image.
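Since resolutions are quoted variously in dots per inch, in line pairs per millimeter, and in micrometer pixel sizes, the conversions are worth making explicit. A minimal sketch (the constants follow from 25.4 mm per inch; the two-pixels-per-line-pair rule is a common sampling assumption, not taken from the text):

```python
def dpi_to_micrometres(dpi):
    """Convert a scanning resolution in dots per inch to the
    corresponding pixel size in micrometres (25.4 mm per inch)."""
    return 25400.0 / dpi

def lp_per_mm_to_micrometres(lp_mm):
    """A resolution of n line pairs per millimetre corresponds to a
    line width of 1000/(2n) micrometres, i.e. the pixel size needed
    (at minimum) to capture it."""
    return 1000.0 / (2.0 * lp_mm)

print(round(dpi_to_micrometres(2540)))      # 2540 dpi = 10 um pixels
print(round(lp_per_mm_to_micrometres(60)))  # 60 lp/mm ~ 8 um
```

The second result agrees with the statement in the scanner discussion that a 60 lp/mm aerial-camera system resolution is equivalent to around 8 μm.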

2.7.3 Digital Image Acquisition

There are two main methods of obtaining digital photos: either after some processing (scanners) or directly (digital cameras). These two methods are discussed in the following two sections.

Scanners:
Capanni and Flamigni, 1996, classify scanners into two categories:
• High-precision photogrammetric devices
• Commercial scanners for publishing
Photogrammetric scanners are characterized by high geometric accuracy, high resolution (over 2500 dpi), and the possibility of choosing the scanning


direction to align it with the one identified by the fiducial marks. There is a rich variety of photogrammetric scanners on the market, which are capable of geometric accuracies similar to or just below those of analytical plotters, i.e., root mean square errors of 2-10 μm per axis. These can produce radiometrically satisfactory data in 8- or 10-bit grayscale or 24-bit colour at raw pixel sizes between 3 and around 20 μm. For users whose requirements are less stringent, especially those whose work consists mainly of medium- and small-scale topographic mapping, several desktop publishing scanners (which can accommodate the 23×23 cm format) have been found useful; they give satisfactory results in the 20-50 μm range. For some applications an attempt should be made to compensate for systematic geometric errors in order to achieve the required accuracy. A system resolution of 60 lp/mm from an aerial camera is equivalent to around 8 μm. In theory a higher scanning resolution, perhaps 6 μm, is required to capture this, but users seem to have settled on values of 10-30 μm for all except intelligence applications. At 25 μm a grayscale image occupies 81 MB, and at 10 μm about 0.5 GB; since DPWs usually work on two or more images simultaneously, the storage demands are considerable [Farag, 1999]. Photogrammetric scanners are very expensive, and their working costs are also high; since the cost of the scanners can reduce the diffusion of digital systems, an image scanned with a publishing device can instead be used for photogrammetric applications with good accuracy results [Capanni and Flamigni, 1996].
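The file sizes quoted above (81 MB at 25 μm and about 0.5 GB at 10 μm for a 23×23 cm frame) follow from simple arithmetic. The sketch below assumes uncompressed 8-bit grayscale, i.e. one byte per pixel:

```python
def scan_size_bytes(format_mm, pixel_um, bytes_per_pixel=1):
    """Uncompressed size of a scanned square frame: pixels per side
    is the format divided by the pixel size; 8-bit grayscale uses one
    byte per pixel, 24-bit colour would use three."""
    pixels_per_side = round(format_mm * 1000.0 / pixel_um)
    return pixels_per_side ** 2 * bytes_per_pixel

# A 23 cm x 23 cm aerial frame:
print(round(scan_size_bytes(230, 25) / 2**20))     # → 81   (MB at 25 um)
print(round(scan_size_bytes(230, 10) / 2**30, 2))  # → 0.49 (GB at 10 um)
```

Both results reproduce the figures quoted in the paragraph above, which supports the point that a stereo pair or block held at 10 μm quickly reaches gigabyte volumes.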


Digital cameras:
Digital cameras use a camera body and lens but record image data with CCDs (charge-coupled devices) rather than film. The electrical signals generated by these detectors are stored digitally, typically on media such as computer disks. While this process is not 'photography' in the traditional sense, since images are not recorded directly onto photographic film, it is often referred to as 'digital photography'. Two-dimensional array sizes of digital cameras typically range from 512×512 pixels to 2048×2048 pixels or more. A typical solid-state CCD camera, used in many industrial applications, will have up to 512 pixels in each coordinate direction. The gray-level resolution of the image depends upon the number of digitization levels used on the analog signal from the camera; a common choice is 256 gray levels [ASPRS, 1989]. A modern Kodak Professional CS 200 digital camera, illustrated in Fig.2.7, uses a 1524 column × 1012 row CCD array, with each sensor in the array being 9×9 μm in size. Exposure times as short as 1/8000 second are available. Images can be either black and white or color.

Fig.2.7: Kodak Professional CS 200 digital camera [after Farag, 1999]


There are also metric cameras available on the photogrammetric market, such as the Rolleiflex 6008 metric camera, which can take a bulk film magazine or a digital back. The camera is delivered with software solutions running under Windows 95/98/NT for most photogrammetric assignments in research, development, and manufacturing. Digital cameras are rather seldom used in aerial photography but are quite common in close-range applications. Formats of 2000×2000 pixels are still quite large: these would give 4 MB per image in grayscale, 12 MB in colour [Farag, 1999]. Digital photography offers numerous advantages, including such very important ones as lower costs, faster turnaround, and greater control over images. But many potential adopters believe that digital photography does not offer the same quality as a conventional photograph, and there is some truth to this, even though it is also partly perception: film is still finer-grained in continuous-tone images, but the difference has become almost impossible to tell in all but the most resolution-intensive applications. At the present time, video cameras are also being used as an acquisition system. While close-range photogrammetry has been widely applied to static deformation analysis, video cameras have many characteristics that make them the sensors of choice for dynamic analysis of rapidly changing situations [Lee, 1999].

2.7.4 Digital Image-Data Processing

With the emergence of modern digital computers, the application of digital techniques to the processing and analysis of image data has become increasingly feasible. For example, digital image-processing techniques can be used to adjust the contrast of a digital image and to enhance its edges. Usually very


flexible and noise-free, digital processing can be used to implement image-data preprocessing, shaping, and matching, as well as pattern recognition, information derivation, and feedback generation. Since large volumes of data are involved in most applications, processing speed is of prime practical importance. New and specialized digital techniques, parallel processing, and microprogramming open the way to the design of special image processors that have the necessary speed and/or a high degree of processing sophistication. The best instruments will represent a proper balance between performance (speed and accuracy), flexibility, and cost [ASPRS, 1989].
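As an illustration of the contrast adjustment mentioned above, a linear contrast stretch is one of the simplest such operations. This is a minimal sketch on a flat list of 8-bit pixel values, not any particular package's implementation:

```python
def contrast_stretch(pixels, out_min=0, out_max=255):
    """Linear contrast stretch for a grayscale image held as a flat
    list of pixel values: map the observed [min, max] range onto the
    full [out_min, out_max] output range."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return list(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

# A dull mid-gray strip spread over the full 8-bit range:
print(contrast_stretch([100, 110, 120, 130]))  # → [0, 85, 170, 255]
```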

2.7.5 Advantages of Digital Photogrammetry

The advantages of digital photogrammetric systems have been summarized by Gruen, 1996, as:
1. Increased accuracy.
2. Reduced equipment cost.
3. Increased throughput (automation).
4. Less qualified operators required (simple user interface, little or no stereo-viewing).
5. Faster availability of results (on-line and real-time processing).
6. Fast image transmission (use of digital images).
7. Better quality and more flexible products (combined, hybrid data).
8. Better data integration; joint platforms with CAD/GIS.
9. New types of products (image-based, visualization, animation).


2.7.6 Fully Digital Photogrammetric Systems

The first commercial fully digital photogrammetric stereostation was presented as the DSP1 by KERN & Co. AG at the XVIth congress of the ISPRS in Kyoto, 1988. This system had a number of military- and university-based predecessors. By 1992 digital photogrammetric workstations (DPWs) had begun their migration from military applications into the commercial market-place. The major characteristics of digital photogrammetric systems, as mentioned by Gruen, 1996, are:
• The system combines computer hardware and application software to allow photogrammetric operations to be carried out on digital image data.
• The sets of digital image data consist of arrays of picture elements (pixels) of fixed size and shape; each pixel has one grey value (black & white image).
• The sensor arrays produce digital data, for example a digital camera incorporating an area array of CCDs, or a pushbroom scanner with a linear array of CCDs.
• Data are often derived from a camera producing frame images on photographic film; these are converted into digital form using high-precision scanners.
• The main element of the system is the DPW, on which the required mathematically based photogrammetric operations are carried out to produce data for input to digital mapping, CAD, or GIS/LIS systems.
• These operations are performed manually, semi-automatically, or fully automatically; for example, most feature extraction, editing, and DEM and orthophoto generation use automated or semi-automated methods.
• Final output may take the form of vector line maps, data files, or image maps; thus many systems include raster plotters or film writers.

Digital Photogrammetric Workstations:
Many commercial digital workstations, which have good accuracy and are easy to use, are available on the photogrammetric market, such as:
• Leica digital photogrammetric workstations: DSW200, DSW500, DSW600 series (mono), DSW700 series (stereo), and the DVP workstation [www.leica.com].
• Zeiss: PHODIS-ST digital workstation, 1996.
• DAT/EM Systems: SUMMIT PCTM [www.datem.com].
• INTERGRAPH: ImageStation ZII and ImageStation SSK [www.intergraph.com].
• LH Systems: SOCET SET® V4.2, 1999 [www.lh-systems.com].
As an example of the photogrammetric workstations, the DVP workstation is shown in Fig.2.8. This workstation is a fully integrated digital photogrammetric system that has been especially designed for efficient three-dimensional data collection, using commercially available, DOS-operated standard personal computers with desktop scanners. Selection of the appropriate scanning resolution results in data that may not be 'as precise as possible', but 'as precise as necessary'. The system has been designed to export and import


data to and from existing CAD or editing systems.

Accuracy: For a given photo scale and geometry, the accuracy is a direct function of the scanning resolution and reliability. The established accuracy level is of the order of 70 percent of the pixel size. The scanning resolution can be adjusted to obtain the accuracy required at minimum scanning time and file size. The DVP is equally equipped to handle close-range applications, especially in the field of architecture [Leica AG, 1996].
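The quoted accuracy level (on the order of 70 percent of the pixel size) can be turned into an object-space estimate once the photo scale is known. The sketch below is only a rule-of-thumb calculation; the 30 μm scan and 1:10,000 photo scale in the example are illustrative numbers, not taken from the Leica specification:

```python
def planimetric_accuracy_m(pixel_um, scale_number, factor=0.7):
    """Rule-of-thumb object-space accuracy of a digital workstation:
    ~70% of the pixel size at image scale, multiplied by the photo
    scale number to convert to metres on the ground/object."""
    return factor * pixel_um * 1e-6 * scale_number

# 30 um scan of 1:10,000 photography:
print(round(planimetric_accuracy_m(30, 10000), 3))  # → 0.21 (metres)
```

The same arithmetic makes the trade-off explicit: halving the pixel size halves the expected error but, as the scanner discussion showed, quadruples the file size.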

Fig.2.8: The DVP Leica workstation [after Leica AG, 1996]

The digital photogrammetric workstations perform the functions of an analytical plotter (AP), and many more tasks besides, based on digital images, with the following main advantages [Mathias, 1991]:
1. More functions and more products.
2. No repetition of interior or exterior orientation.
3. All the expensive, high-precision mechanical components of the AP, such as measuring encoders, are eliminated in the DPW; in most cases there are also no optical components of the type found in analytical plotters.
4. Development benefits from advances in computer technology.
5. Easy hardware maintenance using computer manufacturers' service networks.
6. Greater simplicity.

Along with all of these advantages there is a series of disadvantages, such as:
1. Additional cost of a high-resolution scanner.
2. Poorer image quality than that traditionally provided by film imagery and optical trains.
3. Less smooth image roaming than analytical plotters, despite expensive graphics subsystems.

Software:
Software technology has led to the ability to automatically identify control points, generate digital elevation models, and produce orthophoto products, leading to a potential efficiency revolution in the photogrammetric workplace [Martin, 1996]. A group of software systems specially designed for non-metric photography is available on the photogrammetric market; these offer a variety of analytical approaches and suit special applications, while others are designed for upgrading stereoplotters. Some of these systems are listed below:
• CRABS (Close-Range Analytical Bundle Solution), which utilizes additional parameters, was developed by Kenefick (1977) for industrial measurements.


• CRISP (Close-Range Image Set-up Program), developed in Graz, Austria [Fuchs and Leberl, 1984], for use in conjunction with the Kern DSR-1 analytical plotter, utilizes the DLT approach for its plate-processor program.
• GENTRI (General Triangulation System), a self-calibrating bundle adjustment with data snooping, was developed by Larsson (1983) for close-range photogrammetry as well as for aerial blocks.
• MOR (Multi-image Orientation), developed by Wester-Ebinghaus (1985) as a block-invariant, self-calibrating bundle method, simultaneously adjusting geodetic and photogrammetric observations.
• SAPOCR (Simultaneous Adjustment of Photogrammetric Observations for Close-Range Applications), a modified version of SAPGO (Simultaneous Adjustment of Photogrammetric and Geodetic Observations) [Wong and Elphingstone, 1972], developed at the University of Illinois.
• UNBASC (University of New Brunswick Analytical Self-Calibration Program), developed by Moniwa (1976) in several versions as a photo-variant bundle block adjustment with additional parameters.
• STARS (Simultaneous Triangulation and Resection System), a turnkey system for close-range photogrammetry, developed by Brown (1982) and modified by Fraser and Brown (1986).
• GEBAT-V (General Bundle Adjustment Triangulation - Photo Variant), developed at the University of New Brunswick in Canada [El-Hakim, 1979; El-Hakim and Faig, 1981] and modified at the Canadian National Research Council [El-Hakim, 1982]. It is designed to achieve the highest possible accuracy by reducing the effect of systematic errors, which is particularly critical when using non-metric cameras.
• BSC (Bundle with Self-Calibration), developed by Kurt Novak, 1991, Dublin, Ohio. The program computes accurate coordinates of points from measurements in photographic images. Constraints can be included, such as distances measured between object points or perspective centers.
• BINGO (Bundle Adjustment for Engineering Applications), developed at the University of Hannover in West Germany [Kruck, 1984]. The program is a bundle adjustment with 24 additional parameters.
• DAT/EM Systems International's mapping software has been updated to interface with stereoplotters from ADAM Technologies. This update allows the ADAM plotters to collect 3-D data directly into the world's two most popular CAD packages [www.datem.com].
• ABC32, 1999: software for Windows 95 for upgrading stereoplotters of various types (Intergraph, Zeiss, Wild, Kern, and Helava), developed by ABC software developers, San Rafael, e-mail: [email protected].
• ERDAS provides a range of geographic imaging tools for


extracting information from imagery, in several versions such as Feature Finder, ArcView®, and IMAGINE® 8.4 [www.erdas.com].
• Idrisi 32: an image-processing system for most desktop computers [www.clarklabs.org].
• Visual Giant 4.0: a bundle-adjustment aerotriangulation system, developed by Giant series [www.elasoft.com].
The features and principle of functioning of one such digital photogrammetric software package, "Delta", are illustrated here [Malov, 1996]. The software provides interior, relative (automatic, with correlation), and absolute orientation of photos on the computer's monitor, using a lens stereoscopic unit or stereo eyeglasses. It supports digital mapping and the making of orthoimages from air photos and space photos. "Delta" supports all the principal procedures of stereoscopic processing, such as automatic orientation of the model, collecting topographic data, semantic encoding of entities, etc. The technology of making digital maps by means of digital photogrammetric stations is based on stereoscopic viewing of the digital image of a stereopair on the display screen. A library of functions automating several processes, such as restoring circles, arcs, and parallel lines, orthogonalizing the outlines of buildings, smoothing out contour lines, etc., has been developed to reduce the labour of obtaining a digital terrain model (DTM). "Delta" is used for the solution of the following technological tasks:
• Making and revising digital maps for all the scale series;
• Making plans for terrain cadastre;
• Making architectural and mine-surveying plans on the basis of:
  1. Ground phototheodolite photographs;
  2. Geodetic network densification by analytical photogrammetric triangulation.


2.8 Factors Affecting the Accuracy of the Photogrammetric Systems

Where close-range photogrammetry is employed as a tool for precise 3-D measurements, the parameter of primary interest in a system calibration is usually the accuracy of the coordinates of the object target points [Babaei, 1981]. The factors affecting the accuracy of object-space coordinates fall into two main types: factors affecting traditional photogrammetry and factors affecting digital photogrammetry. The factors affecting the accuracy of the traditional photogrammetric systems also affect the digital photogrammetric systems, but some special factors related to computers and scanners affect the digital photogrammetric systems only. In the following two sections the factors affecting the accuracy of the traditional and the digital photogrammetric systems are discussed.

2.8.1 The Factors Affecting the Traditional Photogrammetric Systems:

Geometric factors: Hottier, 1976, mentioned that the main geometric factors affecting the accuracy of photogrammetric systems are:
1. The ratio of the base to the mean object distance (B/H)
2. Camera axis convergence
3. The number of control points

Number of overlapping photos:


El-Beik and Mahani, 1984, concluded that the accuracy is more homogeneous along all coordinate axes when using the quadrustational system (using four exposure stations) as compared with the two-station system, and that the quadrustational system produces higher accuracy in the Z-direction, which is of paramount importance in most engineering measurement problems.
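The influence of the B/H ratio on depth accuracy can be illustrated with the standard normal-case relation σZ ≈ (H/B)(H/f)·σp. This is a textbook approximation, not a result from the studies cited above, and the numeric values below are hypothetical:

```python
def depth_precision(H, B, f, sigma_p):
    """Standard error in the depth (Z) direction for a normal-case
    stereo pair: sigma_Z = (H / B) * (H / f) * sigma_p.

    H: object distance, B: base, f: focal length,
    sigma_p: parallax measurement accuracy (same length unit as f).
    """
    return (H / B) * (H / f) * sigma_p

# Illustrative values only: H = 8 m, f = 50 mm, parallax accuracy
# 0.01 mm, for several B/H ratios.
for bh in (0.2, 0.4, 0.6):
    B = bh * 8.0
    sz = depth_precision(8.0, B, 0.050, 0.00001)  # metres
    print(f"B/H = {bh:.1f}  ->  sigma_Z = {sz * 1000:.1f} mm")
```

The loop makes the geometric point of this section explicit: a larger B/H ratio (wider base) gives a smaller depth error, all else being equal.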

Human errors: Abdel-Aziz, 1982, reported some other sources of errors affecting the accuracy of the calculated object space coordinates. These sources are:
a) Operator error in setting the inner orientation parameters.
b) Operator error in measuring the model coordinates.
These human factors have been studied by a number of investigators, among them Hou (1975), Galanakis (1977), Babaei (1981), Opitz et al (1982) and Schwartz (1982).

Residual errors: A major factor limiting photogrammetric accuracy is the effect of residual errors that displace the photographic image from its ideal (projective) position [Marks and Asce, 1976]. The mathematical statement of the assumed straight-line path of a light ray from an object point through the camera lens to its photographic image is commonly achieved through the collinearity equations. Differences between the actual image position and the position assumed by the photogrammetric solution result in a corresponding degradation of the solution. The causes of these differences may be divided into three general categories:
1. Bending or deflection of the image ray due to the media through which it is passing at the time of the photographic exposure.


2. Changes in image position on the film from the theoretical flat focal-plane position at the time of photographic exposure to the actual image position at the time of image coordinate measurement.
3. Errors in the image coordinate measurement process.
Bending or deflection of the image rays at the time of exposure may be due to lens distortion, atmospheric refraction, camera window distortion, and turbulence in front of the camera window. Changes in image position on the film between the theoretical position at the time of exposure and the actual position at the time of measurement may be due to lack of film or platen flatness at the time of exposure and film deformation occurring between the times of exposure and measurement. Errors in the image measurement process may result from the measuring comparator, point transfer operations, and the inherent measurement abilities of the observer.
Mathematical model: Mahajan and Singh, 1972, made a comparison between five methods for analytical relative orientation. The results presented and analyzed reveal that the coplanarity method is the most precise, with the collinearity method the next best. The collinearity and coplanarity methods are less time consuming and converge rapidly; their correction equations are simple, the mathematical operations involved are minimal, and the precision is high. Therefore, they are the most economical and efficient. The nature of the image coordinate errors is predominantly systematic within each photograph and from photograph to photograph. The optimum photogrammetric solution occurs when the systematic errors are totally removed and the random errors are minimized. The ability of the photogrammetric solution to


remove systematic errors also depends on the type of the solution used (Marks and Asce, 1976).
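For reference, the collinearity condition referred to above is commonly written in the following standard form, where (x_p, y_p) is the principal point, f the focal length, (X_0, Y_0, Z_0) the exposure station, and r_ij the elements of the rotation matrix (this is the standard textbook form, not a formula quoted from the works cited):

```latex
x - x_p = -f\,\frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}
                   {r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)},
\qquad
y - y_p = -f\,\frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}
                   {r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}
```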

The resolving power of a photographic emulsion: The resolving power of a photographic emulsion is a factor that must be considered in digital photogrammetric systems. It refers to the number of alternating bars and spaces of equal width that can be recorded as visually separate elements in the space of one millimetre on the emulsion, and it provides an indication of the ability of the emulsion to produce distinct images of small, closely spaced objects. In photography, the combination of a bar and a space is referred to as a line or line pair, and resolving power is specified in lines/mm (l/mm) or line pairs/mm (l pr/mm). Thus, a film with a resolving power of 30 l pr/mm will record a pattern consisting of 30 bar-and-space pairs, each pair having a width of 1/30 mm, and the observer theoretically will be able to distinguish 60 linear elements (30 bars and 30 spaces) per millimetre when the image is viewed under suitable magnification. A photographic film with a resolving power of 30 l pr/mm would therefore, in the terminology of monitors, have a resolving power of 60 l/mm. In the United States, the resolving power of an emulsion is commonly evaluated under laboratory conditions by photographing standard three-bar resolution targets through a large-aperture, diffraction-limited microscope objective. The image is then visually examined under magnification to determine the number of l pr/mm resolved [ASP, 1980].
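The arithmetic above can be captured in a few lines (an illustrative sketch only):

```python
def emulsion_resolution(lpr_per_mm):
    """Resolving-power arithmetic: each line pair (bar + space) of a
    pattern at lpr_per_mm occupies 1/lpr_per_mm mm, so a single bar or
    space is half that wide, and the count of visually separate linear
    elements per mm (the monitor-style figure) is twice the pair count."""
    pair_width = 1.0 / lpr_per_mm        # width of one bar-and-space pair
    element_width = pair_width / 2.0     # width of a single bar or space
    lines_per_mm = 2 * lpr_per_mm        # bars + spaces per millimetre
    return pair_width, element_width, lines_per_mm

print(emulsion_resolution(30))  # the 30 l pr/mm film discussed above
```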


2.8.2 The Factors Affecting the Digital Photogrammetric Systems: The factors affecting the photogrammetric measurements using digital images have been mentioned in a number of technical papers and reports. The main theme of most of the research published in this field is the study of the potential of automated pointing and measurement techniques. The following are the factors which may affect the accuracy of using a digital image in the measurements of object space coordinates:

The scanner and the scanning system: This includes the size of the pixel [Driscoll et al, 1974; Thurgood and Mikhail, 1982; Mikhail, 1983; Ackermann, 1983; Mohamed, 1985] and the adjustment of the scanning direction in relation to the fiducial axes [Crombie, 1976; Wolf et al, 1982]. Overall, it can be said that the accuracy of the data collected by a digital system depends mainly on the precision of the scanner [Capanni and Flamigni, 1996]. More details about types of scanners are given in section (2.7.4).

Resolution: The accuracy and precision obtainable from a digital image depends upon the spatial quantization, or pixel size. All other things being equal, the smaller the pixels are, the better the accuracy and precision should be, in a linear relationship. It has also been shown that the precision with which an edge in a digital image can be located depends upon the intensity quantization as well as upon the spatial quantization of the image. The highest accuracy can be obtained


from the highest possible resolution of image scanning [Hanke, 1997]. Ebrahiem, 1999, concluded that the accuracy increases with increasing resolution up to 150 dpi, after which the improvement in accuracy is not remarkable (the obtained accuracy is 0.6 cm). Tao, 1999, used sub-pixel measurements and obtained an accuracy of 0.3 pixel for the computed coordinates. Huang, 1999, achieved measurement of objects to an accuracy of better than 0.5 pixel. This factor is analyzed in detail in this study.
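The dependence of object-space accuracy on scanning resolution can be illustrated by relating dpi, pixel size on the film, and the corresponding footprint on the object (an illustrative calculation; the 1:40 photo scale used below is a hypothetical value, not one of the thesis test cases):

```python
def pixel_footprint(dpi, scale_denominator):
    """Pixel size on the photo (mm) and its footprint on the object (cm)
    for a scan resolution in dpi and a photo scale 1:scale_denominator."""
    pixel_mm = 25.4 / dpi                            # 1 inch = 25.4 mm
    object_cm = pixel_mm * scale_denominator / 10.0  # mm -> cm on object
    return pixel_mm, object_cm

# Illustrative: a 150 dpi scan of a 1:40 photo
pmm, ocm = pixel_footprint(150, 40)
print(f"pixel = {pmm:.3f} mm on film, about {ocm:.2f} cm on the object")
```

The smaller the pixel (higher dpi) or the larger the photo scale, the smaller the ground footprint of one pixel, which is the linear relationship described above.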

Screening system: The resolving power of monitors and similar instruments is normally defined in terms of the number of individual elements at the resolution limit [ASP, 1980]. The accuracy achieved is a function of the procedure of the display system, the hardware and software available, the dimensions of the screen (the maximum number of pixels that can be displayed across the screen), the range of colour facilities [Mannos, 1980; Lemmer, 1982], the digitizing configuration, the stereo techniques [Jepsen, 1976; Schrock, 1980], and the degree of enlargement of the display [Gambino and Crombie, 1979].

Edge enhancement: The application of most edge enhancement operators tends to cause a shift in edge location. Any operation, such as histogram equalization, that affects the gray levels of the image can have an effect on the precision of the imagery [ASPRS, 1989].


Measuring system: The affecting factors here are the method of measuring image coordinates (mono or stereo) and the use of softcopy or hardcopy digital images [Opitz et al, 1982], the control points used to calculate the image coordinates [Wolf and Storey, 1982], the number of observations for each measurement, the computational procedure, the interactive facilities [Norvelle, 1981], and user-friendliness [Bracken et al, 1978; Opitz et al, 1982].

Other factors:

Some other factors which may affect the accuracy of the ground coordinates are the type of the film, the handling of the negative during processing, the resolution of the printing material, the type of printing machine, the scale of the photos, and the focal length of the camera. The last factor was studied by Tao, 1999, using a CCD video camera, who found that a short focal length increases the errors of lens distortion while providing a wide field of view. Beyer, 1992, concluded that the accuracy is limited by the local illumination gradients, the non-uniform background, the small image scale, and large differences thereof. Local illumination gradients could be eliminated by the use of retro-reflective targets. The non-uniform background can be improved by designing a larger area around the targets. The image scale/target size in the imagery, i.e. the number of pixels onto which a target is imaged, can be improved by using larger targets and/or a camera with higher 'resolution'. The first option is not always practical: a target diameter of 20 mm across a 2.6 m large object is already quite large for practical applications. This can in fact be improved with retro targets, as their return is much better and scattering significantly increases the apparent diameter in the imagery. A high-'resolution' camera, i.e. a camera delivering over 1024×1024 pixels, would also reduce the


effects of local illumination gradients and non-uniform background, as the area surrounding the target that affects the target localization is smaller. Finally, the large variation of image scale could be reduced by using a longer focal length. Light, 1999, summarized the factors which affect the accuracy of contour lines as: the plotting instrument, the nature of the terrain, the camera and its calibration, the resolution quality of the photography, the density and accuracy of the ground control, and the capability of the plotter operator.


2.9 Review of some Engineering Applications
The fast development of software and hardware is stimulating the market with a series of potential photogrammetric applications in the architectural and heritage fields. In close-range photogrammetry, the range of the object-to-camera distance is limited. Some advocate 300 meters as a maximum limit, while the minimum distance is essentially zero (say a fraction of a millimetre), to encompass macroscopic and microscopic photographs [ASPRS, 1989]. The following are some close-range photogrammetric applications carried out in the last ten years.
- Astori, 1992, carried out a series of tests in the field of architecture with digital image processing programs such as Orthomap and Archis, marketed by Galileo Siscam, together with the necessary hardware. Two different experiments were made. The first was carried out on part of the survey of a sanctuary's interior with the program Orthomap: a total of six colour photographs, taken with a Rollei 6006 camera (format 6x6 cm) and corresponding to a single view of the sanctuary's interior, were used for the assembly of a longitudinal section, and two plots of the interior were made with a Wild BC3 stereoplotter in order to represent the major architectural lines. The second experiment used the program Archis for the processing of photographs of the sanctuary's exterior. Four colour photographs were used, taken with the same camera and covering the sanctuary's exterior. These photographs were used for the plotting of views of the exterior with the Wild BC3 stereoplotter and with the Wild Elcovision system. The coordinates of 19 points were known; they were determined by surveying methods and used successfully for the plotting.


- Abbas, 1996, studied the accuracy of the Mobile Mapping System, which is mainly used for highway mapping. Hence, the study area was a road inside the Assiut University campus, approximately 12 m wide and 100 m long. A non-metric 35 mm camera was employed as the data acquisition system, with black-and-white films. The system has many potential applications besides highway applications: railway applications; phone companies can employ it to survey their telephone poles along a road; cable companies can use it to measure the size of buildings and determine where to install cable lines. By using a GIS, these data can be efficiently retrieved whenever needed. It was found that the positioning accuracy which can be obtained using the Mobile Mapping System is within 10 cm for targets at 55 m average distance in front of the van.
- Ebrahiem, 1998, developed an approach for digital restitution and orthomapping of close-range objects. The idea of this approach is an inversion of the photographic technique and is (in contrast to the "rectification approach") strictly object oriented. All objects are regarded as describable in their geometrical shape by a number of particular faces that can be regular or irregular but can be created in a CAD environment. This approach allows taking measurements of details on the facades. Accordingly, the information in the original photos is available and can be presented as orthoimages, perspective or parallel views, and a 3D map-covered object for visualisation, e.g. in architecture and archaeology applications.
- Stojic, 1998, used automated DEM acquisition methods to generate dense elevation models of a controlled experimental flume to simulate sediment transport in a braided stream. A Pentax 645 non-metric camera was used to acquire all imagery, and uncertainties concerning the interior orientation of the camera were overcome using a self-calibrating bundle adjustment. The ERDAS


IMAGINE OrthoMAX software package was used to derive all DEMs. The derived elevation models were used in a variety of ways to provide data of geomorphological significance. A study of the quality of the derived data suggests that reliable estimates of sediment transport can theoretically be derived from the detection of morphological change alone, but this is very difficult to achieve in practice.
- While close-range photogrammetry has been widely applied to static deformation analysis, video cameras have many characteristics that make them the sensors of choice for dynamic analysis of rapidly changing situations. C.K. Lee and W. Faig, 1999, studied a video system for monitoring dynamic objects. The system consists of two camcorders, a VCR, and a PC with a frame grabber, used to investigate the basic characteristics of the video camera and the frame grabber in static and dynamic modes. Sequential images of a moving car were captured and digitized at one-fifteenth-of-a-second intervals. The image coordinates of targets attached to the car were acquired with IDRISI, and the object coordinates were derived based on the Direct Linear Transformation (DLT). They found that image processing software for the PC (e.g., IDRISI) was a usable tool for image preprocessing and target centering. For a real-time photogrammetric system, its macro command must be improved to manipulate Windows without manual assistance. DLT was an appropriate model for the non-metric imagery even though its accuracy is slightly lower than for the self-calibration model. In conclusion, a video system and appropriate photogrammetric principles provided the expected results for monitoring a moving car.
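The Direct Linear Transformation mentioned above maps object space to image space through eleven parameters, which can be estimated by linear least squares from control points with known coordinates. A minimal sketch follows (illustrative only; practical implementations add coordinate normalization and lens distortion terms, and all function names here are the author's own):

```python
import numpy as np

def dlt_solve(obj_pts, img_pts):
    """Solve the 11 DLT parameters by linear least squares.
    obj_pts: sequence of (X, Y, Z) object coordinates;
    img_pts: matching (x, y) image coordinates.
    Each point contributes two rows of the linear system."""
    A, b = [], []
    for (X, Y, Z), (x, y) in zip(obj_pts, img_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z])
        b.append(x)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z])
        b.append(y)
    L, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return L

def dlt_project(L, X, Y, Z):
    """Project an object point to image coordinates with DLT parameters L."""
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    x = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den
    y = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den
    return x, y
```

At least six well-distributed, non-coplanar control points are required so that the 2n equations determine the 11 unknowns.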

Chapter (3): Experimental Work


Chapter (3)

Methodology and Developed Tools for Experimental Work

3.1 Introduction:
The factors which may affect the accuracy of photogrammetric systems were discussed in section 2.8. Studying the effects of some of these factors is the main objective of this research. The factors investigated in this work are: the number of control points, the (B/H) ratio, and the resolution. In order to study these factors, a digital photogrammetric testing system is applied using a non-metric camera, a personal computer, and a commercial flatbed scanner. The layout of this system is shown in Fig.3.1. Sixteen photographs of the control field were taken using the non-metric camera. These photographs were arranged to construct twenty photo models (stereo pairs). The range of object-to-camera distances was chosen to vary from H = 5.5 m up to H = 11.5 m in steps of two meters, which changes the photo scale from about 1:20 to about 1:60. The exposure stations were arranged on four lines; each line has four stations, which resulted in five stereo pairs of photos with different (B/H) ratios. The convergent angles of all photos are within fifteen degrees. The layout of the relative positions of the exposure stations with respect to the control field is illustrated in Fig.3.2. Ground measurements of the control points were carried out using a total station, and their ground coordinates were computed using the base line method. The photos were converted to digital form using an 'Acer' scanner at four different resolutions. The instruments were calibrated before use.


[Figure: data-flow diagram. In the field, horizontal and vertical angles are measured by the total station and photos are taken by the non-metric camera; the scanner digitizes the photos. The 'Base Line' program produces the ground coordinates of the points, stored in a (---.ck) file for the check points and a (---.gcf) file for the control points, while the 'PM' program produces the (---.icf) file of photo coordinates. Together with the (---.apx) file of approximate exterior orientation values and the (---.cam) file of camera and additional parameters, these feed the 'BSC' program, which outputs a (---.dxf) file of computed ground coordinates and an (---.opf) file of exterior and interior orientation parameters. The 'RMS' program then computes the root mean square results, and AutoCAD produces a complete (---.dwg) drawing.]

Fig.3.1: Layout of the applied system and data flow in the experimental work


Fig.3.2: The layout of the relative positions of the exposure stations with respect to the control field


A description of the instruments and the software used in this work is presented in this chapter.

3.2 Instruments

This photogrammetric study includes a photographing system (a non-metric camera), a photo coordinate measuring system, a control field with enough control and check points, and a computer to carry out the required mathematical calculations. The following is a description of these elements.

The PENTAX Non-Metric Camera:

Researchers in close-range photogrammetry have clearly shown that, for numerous areas of application, fully acceptable accuracy can often be achieved with non-metric cameras [Karara & Abdel-Aziz, 1984]. So, it was decided to use a non-metric camera as the data acquisition system in this work. A (PENTAX ME-SUPER) non-metric camera was used. The camera is shown in Fig.3.3 and its features can be summarized as follows:
• It has a focal length of fifty millimetres.
• The size of its film frame is 36x24 mm.
• It has a light sensor to set the optimum exposure time automatically, and it can also be switched to manual mode, enabling free choice of exposure time. This may be helpful for adjusting the exposure time to suit different objects with different backgrounds and light conditions.


Fig.3.3:The PENTAX ME-SUPER camera and its accessories


The Control Field:

A control field for close-range photogrammetry consists of a network of photo-identifiable points whose coordinates, referred to a horizontal and a vertical datum, have to be established. The design of a control network is very important: it affects the pointing error, and therefore the shape and size of the targets should be carefully designed [Mohamed, 1985]. The control field used (shown in Fig.3.4) contains 76 points at different heights. This control field was designed and built by Ebraheem, 1992, as a part of his M.Sc. project. The size, shape, and colour of the targets were designed to give the maximum degree of sharpness. The number and positions of the points in the control field were selected to check the accuracy in all directions.

Fig.3.4:The control field, Ebraheem, 1992


Total Station (TOPCON ITS-1):

The ground measurements of the control points were carried out using the 'TOPCON ITS-1' total station (Fig.3.5), provided by the Civil Engineering Department of Assiut University. Horizontal angles, vertical angles, and the base line length were measured by the total station in order to calculate the coordinates of the control points and the check points by the base line method. The minimum angular reading of the total station is one second, and the minimum distance reading is one millimetre. The calibration of this instrument shows that the measuring accuracy (standard deviation) is about five seconds for angular measurements and two millimetres for distance measurements.

Fig.3.5: The TOPCON ITS-1 total station

The Computer and the Scanner:


Many digital photogrammetric workstations are available on the market, offering many advantages in speed and accuracy. But the price of these stations, with regard to the small budget available for this research, pushed us to develop a digital photogrammetric system using a commercial scanner connected to a personal computer. The developed system also has the advantage of enabling us to change the parameters and factors under study freely. Fig.3.6 shows the computer and scanner used. The main technical specifications of these devices are:

- The personal computer specifications:
• Processor: 333 MHz (Pentium II).
• RAM: 64 MB random access memory.
• Monitor: 14 inch.
• Screen card: Super VGA card, 4 MB.
• Hard disk: 6.4 GB.
• Operating system: Windows 95.
This computer is 100% compatible with IBM computers.

- The scanner specifications:
A commercial colour flatbed scanner, the Acer Prisa 320P, is used for digitizing the photos. The main features of the scanner are:
• Fast scan speed; a full-page colour preview takes six seconds.
• 36-bit colour, over 68.7 billion colours for true-to-life pictures.
• 300 x 600 dpi optical resolution.
• 9600 x 9600 dpi maximum resolution.
• The driver program of the scanner is MiraScan. It is TWAIN-compliant and designed to be user-friendly.
• The scanner is driven by the Photo Express software, which allows some image enhancement techniques such as sharpness, contrast, and focusing.

Fig.3.6: The Computer and the Scanner

3.3 Computer Programs (Software)


In order to carry out the required measurements and computations of this investigation, the following computer programs were used; some of them were prepared by the author:
- 'Photo Measurement' program: developed to carry out the photo measurements.
- 'Base Line' program: developed to calculate the object space ground coordinates.
- 'BSC' program: a ready-made aerial triangulation program that applies the bundle adjustment method.
- 'RMS' program: developed in order to calculate the root mean squares.
The following are the main tasks that can be achieved using these programs, with some details concerning their development, running, inputs, and outputs.

3.3.1 Photo Measurement Program 'PM'

The required photo measurements were carried out on softcopies (displayed on the computer screen) with the help of the 'Photo Measurement' program. This program has been developed by the author in the Visual Basic language (VB5). Fig.3.7 shows a flowchart of this program.
Overview of the measuring process: The current x and y coordinates measured by the Visual Basic language are in twip units. The twip is the measuring unit used in this design language; it equals 1/1440 inch, and the screen displays 96 pixels per inch.
The photo axes: The top edge of the photo represents the x-axis and the left edge of the photo represents the y-axis, with their origin at the top left point.
Transformation of the photo coordinates to the centre point in mm units: The current coordinates must first be converted to pixel units, then to inch units using the photo resolution (R). The inch units are transformed into mm units and multiplied by the scale of the photo (S). Finally, the coordinate origin is transferred to the centre point of the photo using the following equations:

x1 = (x(current) * 96) / 1440      (pixel units)
x2 = x1 / R                        (inch units)
x3 = x2 * 25.4 * S                 (mm units)
x4 = x3 - (picture width / 2)      (mm units)

y1 = (y(current) * 96) / 1440      (pixel units)
y2 = y1 / R                        (inch units)
y3 = y2 * 25.4 * S                 (mm units)
y4 = y3 - (picture height / 2)     (mm units)
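The four-step conversion above can be transcribed as a small function (a Python sketch of the VB5 equations; the parameter names are illustrative):

```python
def twips_to_photo_mm(x_twips, y_twips, R, S, width_mm, height_mm):
    """Convert screen coordinates in twips to photo coordinates in mm,
    origin at the photo centre, following the PM program's equations.

    R: scan resolution (dpi), S: scale factor of the displayed photo,
    width_mm/height_mm: picture width and height in mm.
    1440 twips = 1 inch; the screen is assumed to show 96 pixels/inch.
    """
    def convert(t, half_size):
        px = t * 96.0 / 1440.0   # twips -> pixels
        inch = px / R            # pixels -> inches (via resolution R)
        mm = inch * 25.4 * S     # inches -> mm, scaled by S
        return mm - half_size    # shift origin to the photo centre
    return convert(x_twips, width_mm / 2.0), convert(y_twips, height_mm / 2.0)
```

For example, a point read at 1440 twips with R = 96 dpi, S = 1, on a 36 mm wide frame maps to 25.4 − 18 = 7.4 mm from the centre.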


[Figure: flowchart. After opening each of the two photos and entering its resolution (R) and scale (S), the current twip coordinates of every selected point are converted through the transformation equations above, the resulting coordinates of each photo are printed to the MSFlexGrid, and finally the coordinates of the two photos are written to an (-.icf) file.]

Fig.3.7: Flowchart of the photo measurement program


The main features of the PM program
Fig.3.8 shows the program form in the run mode. The program has been designed to be used in an easy, interactive way; its main features can be summarized as follows:
• The program can deal with photos of different resolutions and scales, and the photos are displayed on the screen at their actual size during the measuring process.
• The program takes into consideration the compatibility between the monitor resolution and the scanner output resolution.
• The program allows moving the two pictures together while showing the same scene on each: the stereo pair is displayed on the screen, the two photos are adjusted relative to each other to show the same scene, and then the 'move pictures together' command is checked.
• Every click on the photo plots a dot and draws a circle around the selected point. The point automatically takes a serial number printed near the circle.
• The 'undo' command can be used to change the choice of the last selected point.
• The drawn dots can be hidden at any time and redrawn again without any effect on the measured coordinates.
• The user can cancel the point-drawing property with the help of the 'No Drawing' command, and the measuring process will then be performed without any drawing on the photos.
• The name of the photo can easily be changed, as the name written in the text box under each photo becomes the new name in the output file.


Fig.3.8: The form of the ‘Photo Measurement’ program in the run mode


The output of the PM program:
1. The measured image coordinates can be saved in the (-.icf) format (mm units), which is accepted by the BSC program.
2. The image containing the marks can be saved in a new file with the same format as the input photo.

Limitations:
1. The program measures the coordinates relative to the centre of the input photo, so the photo must be well framed during the scanning process, or it may be cropped using any available image processing program.
2. The size of the accepted photos is limited by the RAM of the computer and the graphics card specifications. So, in case of low RAM, one can cancel the drawing of points with the help of the 'No Drawing' command.
Finally, the program is still under development to be used with aerial photos with fiducial marks. A listing of this program is given in appendix (B), while a softcopy is given on the attached floppy disk.


3.3.2. ‘Base Line’ program

The object space coordinates of the control points and check points were measured using the base line method. The coordinate system used is a right-handed system with the X-axis horizontal and parallel to the base line, the Y-axis in the perpendicular vertical direction, and the Z-axis in the direction of the exposure axis, as shown in Fig.3.9. The measurements required for determining the coordinates are the horizontal and vertical angles to these points, as observed from the two ends of the base line. These measurements were performed using the 'TOPCON' total station. A computer program, 'Base Line', was developed in the Visual Basic language to calculate the coordinates of the points. The mathematical model of the coordinate computations is based on the following three equations, derived from the geometry of Fig.3.9:

Xp = XA + L * Sin(C2) * Cos(C1) / Sin(C1 + C2)      (3.1)

Yp = L * Sin(C2) * tan(θ) / Sin(C1 + C2)            (3.2)

Zp = ZA - L * Sin(C2) * Sin(C1) / Sin(C1 + C2)      (3.3)

Fig.3.9: Geometry of the base line method (the base line AB of length L, the target point P on the control field, the horizontal angles C1 at A and C2 at B, the vertical angle θ at A, and the X, Y, Z axes as defined above)

Where:
A, B          are the two end points of the base line; point (A) lies in the X-Z plane and line AB is parallel to the X-Y plane.
XA, YA, ZA    are the ground coordinates of point (A).
Xp, Yp, Zp    are the ground coordinates of point (P).
L             is the base line length (measuring accuracy ± 2.0 mm).
C1            is the horizontal angle measured from point (A) (measuring accuracy ± 5″).
C2            is the horizontal angle measured from point (B) (measuring accuracy ± 5″).
θ             is the vertical angle measured from point (A) (measuring accuracy ± 5″).
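Equations (3.1)–(3.3) translate directly into code (a sketch, not the author's VB program; angles are supplied in radians rather than the degrees read from the instrument):

```python
import math

def base_line_point(XA, ZA, L, C1, C2, theta):
    """Ground coordinates of a target P by base-line intersection,
    equations (3.1)-(3.3). C1, C2: horizontal angles at A and B;
    theta: vertical angle at A; all angles in radians."""
    # By the sine rule, L*sin(C2)/sin(C1+C2) is the horizontal
    # distance from A to P; it is the common factor of all three
    # equations.
    k = L * math.sin(C2) / math.sin(C1 + C2)
    Xp = XA + k * math.cos(C1)
    Yp = k * math.tan(theta)
    Zp = ZA - k * math.sin(C1)
    return Xp, Yp, Zp
```

For a symmetric check, with L = 10 m, C1 = C2 = 45° and θ = 0, the target lies at the midpoint abscissa Xp = XA + 5 m.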

The form of the 'Base Line' program in the run mode is shown in Fig.3.10. The program features can be summarized as follows:
• The mathematical model for computing the coordinates is given in equations 3.1, 3.2, and 3.3.
• The program has been designed to work in an interactive way.
• It can accept the measured horizontal and vertical angles to each target, as measured from the two ends of the base line, either point by point from the keyboard or for all points from an ASCII file. The results are printed to another ASCII file.
The accuracy of the coordinates is estimated using the error propagation theorem and presented in the following section.

3.3.2.1 Accuracy of the computed coordinates:

According to the error propagation theorem, equations (3.1), (3.2), and (3.3) must be partially differentiated with respect to each measured variable (L, C1, C2, θ) as follows:


Fig.3.10:The form of the ‘Base Line’ program in the run mode


Equation (3.1): Xp = XA +

∂X p ∂ L

∂X p ∂C1

L * Sin(C2) * Cos(C1) Sin(C1 + C2)

=

Sin(C2) * Cos(C1) Sin(C1 + C2)

=

L * Sin(C2) * [-Sin(C1)Sin(C1 + C2) - Cos(C1)Cos(C1 + C2)] (Sin(C1 + C2)) 2

=

L * Cos(C1) * [Cos(C2)Sin(C1 + C2) - Sin(C2)Cos(C1 + C2)] (Sin(C1 + C2)) 2

∂X p ∂C2

Equation (3.2):

$$Y_p = \frac{L\,\sin C_2\,\tan\theta}{\sin(C_1+C_2)}$$

$$\frac{\partial Y_p}{\partial L} = \frac{\sin C_2\,\tan\theta}{\sin(C_1+C_2)}$$

$$\frac{\partial Y_p}{\partial C_1} = \frac{L\,\sin C_2\,\tan\theta\,[-\cos(C_1+C_2)]}{\sin^2(C_1+C_2)}$$

$$\frac{\partial Y_p}{\partial C_2} = \frac{L\,\tan\theta\,[\cos C_2 \sin(C_1+C_2) - \sin C_2 \cos(C_1+C_2)]}{\sin^2(C_1+C_2)}$$

$$\frac{\partial Y_p}{\partial \theta} = \frac{L\,\sin C_2\,\sec^2\theta}{\sin(C_1+C_2)}$$


Equation (3.3):

$$Z_p = Z_A - \frac{L\,\sin C_2\,\sin C_1}{\sin(C_1+C_2)}$$

$$\frac{\partial Z_p}{\partial L} = -\frac{\sin C_2\,\sin C_1}{\sin(C_1+C_2)}$$

$$\frac{\partial Z_p}{\partial C_1} = -\frac{L\,\sin C_2\,[\cos C_1 \sin(C_1+C_2) - \sin C_1 \cos(C_1+C_2)]}{\sin^2(C_1+C_2)}$$

$$\frac{\partial Z_p}{\partial C_2} = -\frac{L\,\sin C_1\,[\cos C_2 \sin(C_1+C_2) - \sin C_2 \cos(C_1+C_2)]}{\sin^2(C_1+C_2)}$$

Substituting in the following formulas gives the standard errors of the coordinates:

$$\sigma_{X_p}^2 = \left(\frac{\partial X_p}{\partial L}\sigma_L\right)^2 + \left(\frac{\partial X_p}{\partial C_1}\sigma_{C_1}\right)^2 + \left(\frac{\partial X_p}{\partial C_2}\sigma_{C_2}\right)^2$$

$$\sigma_{Y_p}^2 = \left(\frac{\partial Y_p}{\partial L}\sigma_L\right)^2 + \left(\frac{\partial Y_p}{\partial C_1}\sigma_{C_1}\right)^2 + \left(\frac{\partial Y_p}{\partial C_2}\sigma_{C_2}\right)^2 + \left(\frac{\partial Y_p}{\partial \theta}\sigma_{\theta}\right)^2$$

$$\sigma_{Z_p}^2 = \left(\frac{\partial Z_p}{\partial L}\sigma_L\right)^2 + \left(\frac{\partial Z_p}{\partial C_1}\sigma_{C_1}\right)^2 + \left(\frac{\partial Z_p}{\partial C_2}\sigma_{C_2}\right)^2$$

A computer program was developed to apply the above formulas to all measured points on the control field; the results are shown in Table 3.1. A sketch of the control field with the point numbers is shown in Fig.3.11.
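The propagation above can be implemented directly. The following Python sketch (illustrative, not the original program) evaluates the partial derivatives of equations 3.1–3.3 and combines them with the stated measuring accuracies (σ_L = ±2.0 mm, σ_angle = ±5″):

```python
import math

def coordinate_std_errors(XA, ZA, L, C1, C2, theta,
                          sL=0.002, sC=math.radians(5 / 3600.0)):
    """Standard errors of (Xp, Yp, Zp) by error propagation through
    equations 3.1-3.3.  Angles in radians, lengths in metres.
    Defaults: base-line accuracy +/- 2.0 mm, angles +/- 5 arc-seconds."""
    s, c = math.sin(C1 + C2), math.cos(C1 + C2)
    # Partials of Xp = XA + L*sin(C2)*cos(C1)/sin(C1+C2)
    dX_dL = math.sin(C2) * math.cos(C1) / s
    dX_dC1 = L * math.sin(C2) * (-math.sin(C1) * s - math.cos(C1) * c) / s ** 2
    dX_dC2 = L * math.cos(C1) * (math.cos(C2) * s - math.sin(C2) * c) / s ** 2
    # Partials of Yp = L*sin(C2)*tan(theta)/sin(C1+C2)
    dY_dL = math.sin(C2) * math.tan(theta) / s
    dY_dC1 = L * math.sin(C2) * math.tan(theta) * (-c) / s ** 2
    dY_dC2 = L * math.tan(theta) * (math.cos(C2) * s - math.sin(C2) * c) / s ** 2
    dY_dth = L * math.sin(C2) / (s * math.cos(theta) ** 2)
    # Partials of Zp = ZA - L*sin(C2)*sin(C1)/sin(C1+C2)
    dZ_dL = -math.sin(C2) * math.sin(C1) / s
    dZ_dC1 = -L * math.sin(C2) * (math.cos(C1) * s - math.sin(C1) * c) / s ** 2
    dZ_dC2 = -L * math.sin(C1) * (math.cos(C2) * s - math.sin(C2) * c) / s ** 2
    sX = math.sqrt((dX_dL * sL) ** 2 + (dX_dC1 * sC) ** 2 + (dX_dC2 * sC) ** 2)
    sY = math.sqrt((dY_dL * sL) ** 2 + (dY_dC1 * sC) ** 2
                   + (dY_dC2 * sC) ** 2 + (dY_dth * sC) ** 2)
    sZ = math.sqrt((dZ_dL * sL) ** 2 + (dZ_dC1 * sC) ** 2 + (dZ_dC2 * sC) ** 2)
    return sX, sY, sZ
```

Because the base-line accuracy (±2.0 mm) is much coarser than 5″ of angle over the tested distances, the length term dominates the budget, which is consistent with the few-millimetre standard errors listed in Table 3.1.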


Table 3.1: Estimated errors in the measured points according to error propagation (values in mm)

Point No.  (X)Direction  (Y)Direction  (X-Y)Plane  (Z)Direction  Vector (R)
  1            0.1           0.3          0.3          4            4
  2            0.2           0.3          0.4          4            4
  3            0.3           0.3          0.4          3.9          3.9
  4            0.5           0.5          0.7          4.1          4.2
  5            0.5           0.6          0.8          4            4.1
  6            0.5           0.8          0.9          3.9          4
  7            0.5           1            1.1          3.9          4.1
  8            0.3           1.1          1.1          4            4.2
  9            0.2           1.1          1.1          4            4.2
 10            0.1           1.1          1.1          4            4.1
 11            0.2           1            1            4            4.1
 12            0.2           1            1            4            4.1
 13            0.2           0.8          0.8          4            4.1
 14            0.2           0.6          0.6          3.9          4
 15            0.2           0.5          0.5          3.9          3.9
 16            0.2           0.2          0.3          4.1          4.1
 17            0.2           0.1          0.2          4            4
 18            0.2           0.2          0.3          4.1          4.1
 19            0.2           0.4          0.4          4.1          4.1
 20            0.1           0.5          0.5          4.1          4.1
 21            0.2           0.5          0.5          4            4
 22            0.3           0.5          0.6          4.1          4.1
 23            0.5           0.4          0.6          4            4.1
 24            0.5           0.2          0.5          4.1          4.1
 25            0.5           0.1          0.5          4.1          4.1
 26            0.5           0.2          0.5          4            4
 27            1.7           0.3          1.7          4            4.4
 28            1.9           0.3          1.9          3.9          4.3
 29            2.1           0.3          2.1          3.9          4.4
 30            2.2           0.5          2.3          4            4.6
 31            2.2           0.6          2.3          3.9          4.5
 32            2.2           0.8          2.3          3.8          4.5
 33            2.2           1            2.4          3.9          4.6
 34            2.2           1            2.4          4            4.7
 35            2             1.1          2.3          3.9          4.5
 36            1.9           1.1          2.2          3.9          4.5
 37            1.7           1.1          2            4            4.5
 38            1.6           1            1.9          3.9          4.3
 39            1.6           0.8          1.8          3.9          4.3
 40            1.6           0.6          1.7          4            4.3
 41            1.6           0.5          1.7          4.1          4.4
 42            1.6           0.2          1.6          4            4.3
 43            1.6           0.1          1.6          4            4.3
 44            1.6           0.2          1.6          4.1          4.4
 45            1.6           0.3          1.6          4.1          4.4
 46            1.7           0.5          1.8          4.1          4.5
 47            1.9           0.5          2            4            4.5
 48            2.1           0.5          2.2          4.1          4.6
 49            2.2           0.4          2.2          4.1          4.7
 50            2.2           0.2          2.2          4            4.6
 51            2.2           0.1          2.2          4.1          4.7
 52            2.2           0.2          2.2          4            4.6
 53            1.4           0.4          1.5          4.1          4.4
 54            1.3           0.4          1.4          4.2          4.4
 55            1.1           0.4          1.2          4.1          4.3
 56            0.9           0.4          1            4.2          4.3
 57            0.8           0.4          0.9          4.2          4.3
 58            0.6           0.4          0.7          4.2          4.3
 59            0.6           0.3          0.7          4.1          4.2
 60            0.8           0.3          0.9          3.9          4
 61            0.9           0.3          0.9          4            4.1
 62            1.1           0.3          1.1          4            4.2
 63            1.3           0.3          1.3          4            4.2
 64            1.4           0.3          1.4          4.1          4.3
 65            1.4           0.4          1.5          4.1          4.4
 66            1.3           0.4          1.4          4.1          4.3
 67            1.1           0.4          1.2          4.1          4.3
 68            0.9           0.4          1            4.1          4.2
 69            0.8           0.4          0.9          4.1          4.2
 70            0.6           0.4          0.7          3.9          4
 71            0.6           1            1.2          4            4.2
 72            0.8           1            1.3          4            4.2
 73            0.9           1            1.3          4            4.2
 74            1.1           1            1.5          3.8          4.1
 75            1.3           1            1.6          4            4.3
 76            1.4           1            1.7          4            4.4


Fig.3.11: Sketch of the control field with the point numbers

The following remarks may be drawn from the results:

• The standard deviation in the X-direction is about 2 mm.

• The standard deviation in the Y-direction is about 1 mm.

• The standard deviations in the X-direction and the Y-direction are noticed to increase with the increase of the corresponding coordinates.

• The standard deviation in the (X-Y) plane is computed and found not to exceed 2.4 mm.

• The standard deviation in the Z-direction is about 4 mm for all points.

• The maximum standard deviation of the space vector (R) is about 5 mm.


3.3.3 ‘BSC’ program

Several software packages specially designed for non-metric photography are available on the photogrammetric market. Such programs implement a variety of analytical approaches and are suitable for specific applications (see Sec. 2.7.6). The ‘BSC’ (Bundle with Self-Calibration) program, developed and written by Dr. Kurt Novak of the Ohio State University, is used in this research.

The objective of the bundle solution is to compute accurate object-space coordinates of points from their photographic coordinates. Basically, a bundle is created by each exposure of the camera. The perspective center and the points in the image define this bundle in a local image coordinate system. The bundle solution tries to shift and rotate these bundles of light-rays so that they fit some given control points and so that all conjugate light-rays intersect at the corresponding object points. Additionally, constraints can be included, such as distances measured between object points or perspective centers, to further stabilize the solution [Novak, 1991].

Bundle adjustment is based on the collinearity equations (2.14, 2.15). It represents a non-linear mathematical relationship, which cannot be applied directly in least squares; a linearization is necessary, which implies the need for approximate values. Depending on the geometry of the block of photographs, the quality of the approximations may vary. Once the least squares adjustment is solved, one can also compute parameters describing the precision of the results. These values represent only the theoretical precision and do not necessarily correspond to the real accuracy of the computed coordinates [Novak, 1991].

The program is structured into three groups of functions (see Fig.3.12):
1. The input of data
2. The adjustment of data
3. The output of the results

[Figure: DATA INPUT (image coordinates, ground coordinates, camera parameters, approximations of unknowns, distances) → ADJUSTMENT (least squares solution, constraining of parameters, computation of precision and correlation) → DATA OUTPUT (orientation parameters, adjusted coordinates, protocol of computation)]

Fig.3.12: The three groups of functions in the BSC program [Novak, 1991]

The following are the main features of the ‘BSC’ program as given by its manual:

• BSC allows the user to read image coordinates; ground coordinates, which can be used as control points; predefined camera parameters; approximations of unknown parameters, such as tie-points and exterior orientation parameters of camera stations; and distances between object points and perspective centers.

• In the least squares solution, various parameters can be treated as known or unknown by manipulating their weights in the matrices. For example, if the coordinates of the object-space points are desired, the camera position and orientation parameters can be held fixed, if known, and the solution becomes an intersection procedure. Alternatively, the camera parameters may be treated as unknowns and solved for along with the point coordinates in a simultaneous solution. If the calibration of the camera is not known precisely and the geometry of the problem permits, the camera parameters can be treated as unknowns; i.e., the user can constrain certain parameters so that they do not change during the adjustment, specifically the additional parameters as well as the exterior orientation of the bundles.

• One can obtain an output of the residuals of the observations, the precision of the computed parameters, and the correlation between the parameters of the interior orientation.

• The output segment allows the user to display all parameters currently stored in memory as well as to write the orientation parameters to a separate file.

• The program outputs all adjusted coordinates in a (.dxf) file, so that the user can display the results in AutoCAD.

• A listing of all computations can be printed to a file.

• The program can check the accuracy of the point coordinates and calculate the root mean square errors for the (X-Y) plane and the (Z) direction.

The Manual of Non-Topographic Photogrammetry, 1989, illustrates that for work of extremely high accuracy, a powerful technique is to treat all of the camera orientation parameters and point coordinates as unknowns; this technique allows the very precise photogrammetric measurements to escape contamination by a ground survey that may be less precise or very hard to obtain. Accordingly, all the camera parameters are treated as unknowns, and the two camera stations are also defined as two different cameras, to obtain the best ground-coordinate accuracy.

3.3.4. ‘RMS’ program (designed):

The BSC program allows computing the root mean square errors in the (X-Y) plane and in the Z-direction only. In order to study the errors in the (X) and (Y) directions and the space error vector (R), a computer program was developed specially for this purpose. This program, ‘RMS’, is developed in the Visual Basic language to calculate the root mean squares of the differences between the ground coordinates obtained by photogrammetry and the ground coordinates computed by ground surveying. The program is designed to accept directly the output file of the BSC program in (.dxf) format. The results are printed to an ASCII file which includes the mean errors for all the points and the computed root mean square values.

Chapter (4): Analysis of Results


Chapter (4)

Results and their Analysis

4.1 Introduction

In this work, a series of tests has been carried out in order to study the effects of some factors on the accuracy of the space coordinates computed by digital photogrammetric techniques. The research program is prepared to study the effects of the following factors:

1- Number of control points.
2- (B/H) ratio.
3- Resolution.

The first two factors are traditional factors, but the third one is specific to digital photogrammetry, as it concerns the size of the pixel and the process of digitizing the photos. The bundle adjustment with self-calibration program ‘BSC’ is used in this work. To get maximum accuracy, the exterior and the interior orientation parameters of the camera stations are set free in the bundle adjustment. The two camera stations are also treated separately; that means the focal length, perspective center, and additional parameters are changeable for each exposure station. Six additional parameters are used in this program: two for radial distortion, two for decentering distortion, and two affinity parameters. Several tests have been carried out in this research to study the effects of the factors mentioned before. The tests are conducted using the control field


shown in Fig.3.3. These tests are presented here in the same order as they were carried out during the research work. For each test, the differences between the ground coordinates and the computed coordinates of the check points were calculated, then the root mean square error was calculated as follows:

$$(RMS)_X = \sqrt{\frac{\sum_{i=1}^{n} V_{X_i}^2}{n}} \qquad (4\text{-}1)$$

$$V_{X_i} = X_{G_i} - X_{C_i}$$

Where:

(RMS)  is the root mean square error of the check points in a certain direction (X),
V_X    is the difference between the ground coordinate and the computed coordinate in a certain direction (X),
n      is the number of check points.

Similar formulas were used for the (Y) direction, the (X-Y) plane, the (Z) direction, and the space vector (R) to compute the root mean square values for each direction. The (X-Y) plane value represents the displacement vector of the check points in a plane perpendicular to the axis of the camera, and the space vector (R) represents the displacement vector of the check points in space.
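Equation (4-1) and its companions can be sketched in Python as follows. This is an illustrative sketch, not the original ‘RMS’ program; the (X-Y) plane and space-vector values are assumed here to be the per-axis RMS values combined in quadrature, which reproduces the tabulated results to within rounding:

```python
import math

def rms(ground, computed):
    """Root mean square error, equation (4-1): sqrt(sum(V^2)/n),
    with V_i = ground_i - computed_i over the n check points."""
    v = [g - c for g, c in zip(ground, computed)]
    return math.sqrt(sum(d * d for d in v) / len(v))

def rms_report(ground_pts, photo_pts):
    """Per-axis RMS plus the (X-Y) plane and space vector (R)
    combinations; each argument is a list of (X, Y, Z) tuples."""
    rx = rms([p[0] for p in ground_pts], [p[0] for p in photo_pts])
    ry = rms([p[1] for p in ground_pts], [p[1] for p in photo_pts])
    rz = rms([p[2] for p in ground_pts], [p[2] for p in photo_pts])
    rxy = math.sqrt(rx ** 2 + ry ** 2)            # (X-Y) plane
    rr = math.sqrt(rx ** 2 + ry ** 2 + rz ** 2)   # space vector (R)
    return rx, ry, rxy, rz, rr
```

As a check against the tables below: combining 0.22 cm (X) and 0.15 cm (Y) in quadrature gives 0.27 cm for the (X-Y) plane, matching the tabulated value.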


4.2 Number of Control Points

The main objective of this set of tests is to determine the optimum number of control points required to reconstruct a single model in order to obtain the best space-coordinate accuracy. Photogrammetric control points can be any points whose positions are known in any object-space reference coordinate system and whose images can be positively identified in the photographs. The used control field contains 76 points, arranged in nine groups, in order to allow changing the number of control points used from eight to sixteen. The distribution of the used control points in the control field is shown in Fig.4.1. To determine the optimum number of control points, a test was carried out using a large-scale stereo pair. The object-camera distance is equal to five and a half meters (the smallest tested camera-object distance), with a B/H ratio equal to 0.7. The obtained photos were digitized with the highest available optical resolution of the scanner (300dpi). The root mean square errors (RMS) for check points in the (X) direction, the (Y) direction, the (X-Y) plane, the (Z) direction, and the space vector (R) were computed for each group of control points and are given in Table 4.1. These results are presented graphically in Fig.4.2. The RMS in the (X) direction in the case of using eight control points is 0.31cm, and it decreases to 0.22cm at eleven control points, i.e. the percentage of improvement is about 29%. From eleven control points up to sixteen control points the curve is relatively constant (no significant improvement is obtained). The RMS in the (Y) direction in the case of using eight control points is 0.28cm, and it decreases to 0.15cm at eleven control points, i.e. the percentage of improvement is about 46%. There is no significant improvement of accuracy by


Fig.4.1: The distribution of the control points in the used control field


increasing the number of control points from eleven to sixteen control points. The RMS in the (X-Y) plane in the case of using eight control points is 0.43cm, and it decreases to 0.27cm at eleven control points, i.e. the percentage of improvement is about 37%; there is no significant improvement in accuracy by increasing the number of control points from eleven to sixteen. The RMS in the (Z) direction in the case of using eight control points is 0.52cm, and it decreases to 0.34cm at eleven control points, i.e. the percentage of improvement is about 34%; there is no significant improvement in accuracy by increasing the number of control points from eleven to sixteen. The RMS of the space vector (R), which represents the whole space error vector, in the case of using eight control points is 0.67cm, and it decreases to 0.43cm at eleven control points, i.e. the percentage of improvement is about 35%; again there is no significant improvement in accuracy by increasing the number of control points from eleven to sixteen. From this discussion, it is evident that the accuracy increases with the increase of the number of control points up to a certain limit, after which it becomes constant or the improvement becomes insignificant. This limit is eleven control points for each of the X-direction, the Y-direction, the (X-Y) plane, the Z-direction, and the space vector (R). Accordingly, using eleven control points can be considered satisfactory for the accuracy requirements. On the other hand, one can realize that, for all tested configurations, the maximum RMS is found in the Z-direction while the minimum RMS is found in the Y-direction.


Table 4.1: The RMS of check points using different numbers of control points (RMS in cm)

No. of control points  (X)Direction  (Y)Direction  (X-Y)Plane  (Z)Direction  Vector (R)
        8                  0.31          0.28          0.43        0.52         0.67
        9                  0.27          0.26          0.38        0.40         0.56
       10                  0.23          0.18          0.29        0.37         0.47
       11                  0.22          0.15          0.27        0.34         0.43
       12                  0.22          0.15          0.27        0.33         0.43
       13                  0.23          0.15          0.27        0.34         0.44
       14                  0.23          0.15          0.27        0.34         0.44
       15                  0.23          0.15          0.27        0.35         0.44
       16                  0.23          0.16          0.28        0.33         0.44

Fig.4.2: The relationship between the number of control points and the root mean square error (RMS) in check points.


4.3 Base Line Length to Object Camera Distance Ratio (B/H)

In order to determine the (B/H) ratio that gives the best space-coordinate accuracy, a test on stereo-pairs with an object-camera distance equal to five and a half meters was carried out. The optimum number of control points and the highest available resolution (300 dpi) were used. Five different configurations of the (B/H) ratio were tested, in which the (B/H) ratio changed from 0.2 to 0.7. Table 4.2 shows the (B/H) ratio and the root mean square error (RMS) for check points in the (X) direction, the (Y) direction, the (X-Y) plane, the (Z) direction, and the space vector (R). These results are presented graphically in Fig.4.3. The RMS in the (X) direction is 0.34cm at a (B/H) ratio equal to 0.2, and it decreases to 0.22cm at a (B/H) ratio equal to 0.6, i.e. the percentage of improvement is about 35%. No significant improvement in accuracy is achieved when increasing the (B/H) ratio from 0.6 to 0.7. The RMS in the (Y) direction is 0.28cm at a (B/H) ratio equal to 0.2, and it decreases to 0.15cm at a (B/H) ratio equal to 0.7, i.e. the percentage of improvement is about 46%. The RMS in the (X-Y) plane is 0.44cm at a (B/H) ratio equal to 0.2, and it decreases to 0.29cm at a (B/H) ratio equal to 0.6, i.e. the percentage of improvement is about 34%. No significant improvement in accuracy is achieved when increasing the (B/H) ratio from 0.6 to 0.7. The RMS in the (Z) direction is 0.99cm at a (B/H) ratio equal to 0.2, and it decreases to 0.34cm at a (B/H) ratio equal to 0.6, i.e. the percentage of improvement is about 65%. No significant improvement in accuracy is achieved


when increasing the (B/H) ratio from 0.6 to 0.7. It can be noticed that the percentage of improvement in the (Z) direction is about twice the improvement in the (X-Y) plane. The RMS of the space vector (R) is 1.08cm at a (B/H) ratio equal to 0.2, and it decreases to 0.45cm at a (B/H) ratio equal to 0.6, i.e. the percentage of improvement is about 60%. No significant improvement in accuracy is achieved when increasing the (B/H) ratio from 0.6 to 0.7. From Fig.4.3, it can be noticed that the root mean square errors for the (X-Y) plane, the (Z) direction, and the space vector (R) decrease with the increase of the (B/H) ratio. The best accuracy was obtained at a (B/H) ratio equal to 0.6 in the X-direction, at a (B/H) ratio equal to 0.7 in the Y-direction, and at a (B/H) ratio equal to 0.6 in the Z-direction. It can also be noticed that the rate of improvement of the root mean square error in the (X-Y) plane is less than that in the Z-direction. From this investigation, it can be concluded that the optimum (B/H) ratio, which can be considered satisfactory for the accuracy requirements, lies between 0.6 and 0.7. It was also realized that the identification and sharpness of the control points are affected by changing the camera position: they appear sharper when they are facing the camera.


Table 4.2: The RMS of check points using different (B/H) ratios (RMS in cm)

(B/H) ratio  (X)Direction  (Y)Direction  (X-Y)Plane  (Z)Direction  Vector (R)
   0.2           0.34          0.28          0.44        0.99         1.08
   0.4           0.29          0.25          0.38        0.77         0.90
   0.5           0.25          0.19          0.31        0.36         0.48
   0.6           0.22          0.17          0.29        0.34         0.45
   0.7           0.22          0.15          0.27        0.33         0.43

Fig.4.3: The relationship between the (B/H) ratio and the root mean square error (RMS) for check points.


4.4 Resolution

Two types of resolution were considered in this study: the digitizing resolution of the hard copy photographs and the ground resolution (see Sec. 2.7.2). The ground pixel size (g.p.s.) is a function of the photo scale and the digitizing resolution (scanner pixel size, s.p.s.). This relationship can be represented graphically as shown in Fig.4.4, which represents the photographing and printing conditions as applied in this study, where:

The camera focal length (f) = 50 mm,
Film size (negative) = 24 × 36 mm,
Size of printed hard copy photographs = 100 × 150 mm.

From the geometry of this figure:

$$\frac{f}{h_p} = \frac{36}{150} \;\Rightarrow\; \frac{50}{h_p} = \frac{36}{150} \;\Rightarrow\; h_p = \frac{7500}{36}\ \text{mm}$$

$$\text{Average scale} = \frac{h_p}{H} = \frac{\text{pixel size on the photo}}{\text{ground pixel size}} = \frac{25.4/R}{\text{g.p.s.}}$$

$$\therefore\ \text{g.p.s.} = \frac{25.4 \times 36 \times H}{7500 \times R} \qquad (4\text{-}2)$$

Fig.4.4: Ground pixel size


Where:

f       is the focal length,
hp      is the printing distance,
H       is the camera-object distance,
R       is the digitizing resolution of the photo,
g.p.s.  is the ground pixel size.

From equation (4-2), it can be noticed that the ground pixel size is a function of the resolution (R) and the camera-object distance (H). Obviously, the ground pixel size increases with the increase of the camera-object distance at a given resolution. On the other hand, the ground pixel size decreases with the increase of the resolution at the same camera-object distance. Applying equation (4-2) to the cases under investigation results in Table 4.3:

Table 4.3: The ground pixel size in (cm) for different camera-object distances

Camera-object distance H (m)   50dpi   100dpi   200dpi   300dpi
            5.5                1.34     0.67     0.34     0.22
            7.5                1.83     0.91     0.46     0.31
            9.5                2.32     1.16     0.58     0.39
           11.5                2.80     1.40     0.70     0.47


4.4.1 Digitizing Resolution

To study the effect of the digitizing resolution on the accuracy of the space coordinates, the stereo pairs with the optimum number of control points and the optimum (B/H) ratio were used. The tests were carried out at different values of the camera-object distance (H):

Case 1: at H = 5.5 m
Case 2: at H = 7.5 m
Case 3: at H = 9.5 m
Case 4: at H = 11.5 m

Four different digitizing resolutions (50dpi, 100dpi, 200dpi, and 300dpi) were tested in each case. The digitization was performed at the optical resolution, as resolution produced by interpolation can cause some smoothing of the digitized images and increases the size of the image data. The obtained results for each case can be summarized as follows:

- Case 1: (at H = 5.5 m)

Table 4.4 shows the root mean square error (RMS) for check points in the (X) direction, the (Y) direction, the (X-Y) plane, the (Z) direction, and the space vector (R) at an object-camera distance equal to five and a half meters. The results are presented graphically in Fig.4.5. The RMS in the (X) direction at resolution 50dpi is 0.50cm, and it decreases to 0.17cm at resolution 300dpi, i.e. the percentage of improvement is about 66%. The RMS in the (Y) direction at resolution 50dpi is 0.44cm, and it decreases to 0.20cm at resolution 300dpi, i.e. the percentage of improvement is about 54%.


The RMS in the (X-Y) plane at resolution 50dpi is 0.67cm, and it decreases to 0.45cm at resolution 100dpi, i.e. the percentage of improvement is about 32%; the percentage of improvement is about 13% from resolution 100dpi to resolution 200dpi, and about 30% from resolution 200dpi to resolution 300dpi. The RMS in the (Z) direction at resolution 50dpi is 1.23cm, and it decreases to 0.58cm at resolution 100dpi, i.e. the percentage of improvement is about 52%; the percentage of improvement is about 13% from resolution 100dpi to resolution 200dpi, and about 30% from resolution 200dpi to resolution 300dpi. The RMS of the space vector (R) at resolution 50dpi is 1.40cm, and it decreases to 0.73cm at resolution 100dpi, i.e. the percentage of improvement is about 47%; the percentage of improvement is about 13% from resolution 100dpi to resolution 200dpi, and about 30% from resolution 200dpi to resolution 300dpi. From this discussion and the graphical representation, it is obvious that the obtained accuracy improves at different rates with the increase of resolution. The improvement is most significant up to the 100dpi resolution, where the percentage of improvement in accuracy over the first 50dpi (from 50dpi to 100dpi) is about three times the improvement over the next 100dpi (from 100dpi to 200dpi). On the other hand, the percentage of improvement over the last 100dpi (from 200dpi to 300dpi) is about twice the improvement over the middle 100dpi (from 100dpi to 200dpi). Moreover, the results indicate that more improvement is expected if one uses a better scanner (with higher optical resolution) than that available in this research.


Table 4.4: The RMS errors of check points using different scanning resolutions (RMS in cm)

Resolution (dpi)  (X)Direction  (Y)Direction  (X-Y)Plane  (Z)Direction  Vector (R)
       50             0.50          0.44          0.67        1.23         1.40
      100             0.37          0.26          0.45        0.58         0.73
      200             0.28          0.24          0.39        0.50         0.63
      300             0.17          0.20          0.27        0.35         0.44

Fig.4.5: The relationship between the scanning resolution of the photos and the root mean square error (RMS) for check points, at H = 5.5 m.


- Case 2: (at H = 7.5 m)

Table 4.5 shows the root mean square error (RMS) for check points in the (X) direction, the (Y) direction, the (X-Y) plane, the (Z) direction, and the space vector (R) when the object-camera distance is equal to seven and a half meters. The results are presented graphically in Fig.4.6. The RMS in the (X) direction at resolution 50dpi is 0.72cm, and it decreases to 0.20cm at resolution 300dpi, i.e. the percentage of improvement is about 72%. The RMS in the (Y) direction at resolution 50dpi is 0.59cm, and it decreases to 0.23cm at resolution 300dpi, i.e. the percentage of improvement is about 61%. The RMS in the (X-Y) plane at resolution 50dpi is 0.93cm, and it decreases to 0.53cm at resolution 100dpi, i.e. the percentage of improvement is about 43%; the percentage of improvement is about 26% from resolution 100dpi to resolution 200dpi, and about 25% from resolution 200dpi to resolution 300dpi. The RMS in the (Z) direction at resolution 50dpi is 2.04cm, and it decreases to 0.91cm at resolution 100dpi, i.e. the percentage of improvement is about 55%; the percentage of improvement is about 22% from resolution 100dpi to resolution 200dpi, and about 25% from resolution 200dpi to resolution 300dpi. The RMS of the space vector (R) at resolution 50dpi is 2.24cm, and it decreases to 1.05cm at resolution 100dpi, i.e. the percentage of improvement is about 53%; the percentage of improvement is about 14% from resolution 100dpi to resolution 200dpi, and about 33% from resolution 200dpi to resolution 300dpi. From this discussion and the graphical representation, it is evident that the


obtained accuracy is improved at different rates with the increase of resolution. The improvement is most significant up to a resolution of 100dpi, where the percentage of improvement in accuracy over the first 50dpi (from 50dpi to 100dpi) is about three times the improvement over the next 100dpi (from 100dpi to 200dpi). On the other hand, the percentage of improvement over the last 100dpi (from 200dpi to 300dpi) is about twice the improvement over the middle 100dpi (from 100dpi to 200dpi).


Table 4.5: The RMS errors of check points using different scanning resolutions (RMS in cm)

Resolution (dpi)  (X)Direction  (Y)Direction  (X-Y)Plane  (Z)Direction  Vector (R)
       50             0.72          0.59          0.93        2.04         2.24
      100             0.38          0.36          0.53        0.91         1.05
      200             0.30          0.24          0.39        0.71         0.90
      300             0.20          0.23          0.29        0.53         0.60

Fig.4.6: The relationship between the scanning resolution of the photos and the root mean square error (RMS) for check points, at H = 7.5 m.


- Case 3: (at H = 9.5 m)

Table 4.6 shows the root mean square error (RMS) for check points in the (X) direction, the (Y) direction, the (X-Y) plane, the (Z) direction, and the space vector (R) when the object-camera distance is equal to nine and a half meters. The results are presented graphically in Fig.4.7. The RMS in the (X) direction at resolution 50dpi is 0.86cm, and it decreases to 0.21cm at resolution 300dpi, i.e. the percentage of improvement is about 75%. The RMS in the (Y) direction at resolution 50dpi is 0.71cm, and it decreases to 0.25cm at resolution 300dpi, i.e. the percentage of improvement is about 64%. The RMS in the (X-Y) plane at resolution 50dpi is 1.11cm, and it decreases to 0.67cm at resolution 100dpi, i.e. the percentage of improvement is about 39%; the percentage of improvement is about 35% from resolution 100dpi to resolution 200dpi, and about 23% from resolution 200dpi to resolution 300dpi. The RMS in the (Z) direction at resolution 50dpi is 2.61cm, and it decreases to 1.13cm at resolution 100dpi, i.e. the percentage of improvement is about 56%; the percentage of improvement is about 26% from resolution 100dpi to resolution 200dpi, and about 27% from resolution 200dpi to resolution 300dpi. It can be noticed that on the (X-Y) plane curve the percentage of improvement over the first 50dpi is relatively the same as the improvement over the next 100dpi, while on the (Z) direction curve the percentage of improvement over the first 50dpi is about twice the improvement over the next 100dpi. The RMS of the space vector (R) at resolution 50dpi is 2.84cm and it


decreases to 1.32cm at resolution 100dpi, i.e. the percentage of improvement is about 54%; the percentage of improvement is about 30% from resolution 100dpi to resolution 200dpi, and about 26% from resolution 200dpi to resolution 300dpi. From this discussion and the graphical representation, it is clear that the obtained accuracy improves at different rates with the increase of resolution. The improvement is most significant up to a resolution of 100dpi, where the percentage of improvement in accuracy over the first 50dpi (from 50dpi to 100dpi) is about twice the improvement over the next 100dpi (from 100dpi to 200dpi). On the other hand, the percentage of improvement over the last 100dpi (from 200dpi to 300dpi) is relatively the same as the improvement over the middle 100dpi (from 100dpi to 200dpi).


Table 4.6: The RMS errors of check points using different scanning resolutions (RMS in cm)

Resolution (dpi)  (X)Direction  (Y)Direction  (X-Y)Plane  (Z)Direction  Vector (R)
       50             0.86          0.71          1.11        2.61         2.84
      100             0.51          0.43          0.67        1.13         1.32
      200             0.32          0.29          0.43        0.83         0.92
      300             0.21          0.25          0.33        0.60         0.68

Fig.4.7: The relationship between the scanning resolution of the photos and the root mean square error (RMS) for check points, at H = 9.5 m.


- Case 4: (at H = 11.5 m)

Table 4.7 shows the root mean square error (RMS) for check points in the (X) direction, the (Y) direction, the (X-Y) plane, the (Z) direction, and the space vector (R) when the object-camera distance is equal to eleven and a half meters. The results are presented graphically in Fig.4.8. The RMS in the (X) direction at resolution 50dpi is 1.23cm, and it decreases to 0.23cm at resolution 300dpi, i.e. the percentage of improvement is about 81%. The RMS in the (Y) direction at resolution 50dpi is 0.94cm, and it decreases to 0.27cm at resolution 300dpi, i.e. the percentage of improvement is about 71%. The RMS in the (X-Y) plane at resolution 50dpi is 1.55cm, and it decreases to 0.82cm at resolution 100dpi, i.e. the percentage of improvement is about 47%; the percentage of improvement is about 30% from resolution 100dpi to resolution 200dpi, and about 38% from resolution 200dpi to resolution 300dpi. The RMS in the (Z) direction at resolution 50dpi is 3.62cm, and it decreases to 1.92cm at resolution 100dpi, i.e. the percentage of improvement is about 47%; the percentage of improvement is about 57% from resolution 100dpi to resolution 200dpi, and about 27% from resolution 200dpi to resolution 300dpi. The RMS of the space vector (R) at resolution 50dpi is 3.93cm, and it decreases to 2.09cm at resolution 100dpi, i.e. the percentage of improvement is about 46%; the percentage of improvement is about 52% from resolution 100dpi to resolution 200dpi, and about 29% from resolution 200dpi to resolution 300dpi. From this discussion and the graphical representation, it is evident that the


obtained accuracy increases at different rates with increasing resolution, with the most significant improvement occurring up to a resolution of 100 dpi. It can also be noticed that the rates of improvement for the (X-Y) plane, the (Z) direction, and the space vector (R) over the first 50 dpi are nearly the same. Over the next 100 dpi the improvement in the RMS of the (Z) direction is higher than that of the (X-Y) plane, while over the last 100 dpi the improvement in the (X-Y) plane is higher than that in the (Z) direction.
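The percentage-improvement figures quoted above follow directly from the tabulated RMS values. A minimal sketch (values taken from Table 4.7; the helper name `improvement` is chosen here for illustration) reproduces two of them:

```python
# Percentage improvement in RMS between two scanning resolutions,
# computed as (RMS_coarse - RMS_fine) / RMS_coarse * 100.
# RMS values (cm) at H = 11.5 m, taken from Table 4.7.
rms = {
    "X": {50: 1.23, 300: 0.23},
    "Y": {50: 0.94, 300: 0.27},
}

def improvement(coarse, fine):
    """Relative reduction of the RMS error, in percent."""
    return (coarse - fine) / coarse * 100.0

for axis, values in rms.items():
    print(f"({axis}) direction: {improvement(values[50], values[300]):.0f}%")
# (X) direction: 81%
# (Y) direction: 71%
```

The same computation applied between successive resolutions yields the intermediate improvement rates quoted for the (X-Y) plane, the (Z) direction, and the space vector (R).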


Table 4.7: The RMS errors of check points using different scanning resolutions

Resolution   (X)Direction   (Y)Direction   (X-Y)Plane   (Z)Direction   Vector (R)
  (dpi)      --------------------------------- RMS (cm) ---------------------------------
    50           1.23           0.94          1.55          3.62          3.93
   100           0.57           0.57          0.82          1.92          2.09
   200           0.46           0.33          0.57          0.84          1.00
   300           0.23           0.27          0.35          0.61          0.71

[Fig.4.8 plot: RMS (cm) of the (X) and (Y) directions, the (X-Y) plane, the (Z) direction, and the space vector (R) versus scanning resolution, 50 to 300 dpi.]

Fig.4.8: The relationship between the scanning resolution of the photos and the root mean squares error (RMS) for check points, at H = 11.5 m


4.4.2 Ground Resolution

In order to emphasize the effect of the ground pixel size on the accuracy obtained from measurements on digital images, the results of Sec. 4.4.1 are represented here as the relationship between the root mean squares error, computed using equation (4-1) for the (X-Y) plane, the (Z) direction, and the space vector (R), and the ground pixel size computed using equation (4-2), as shown in Table 4.8; a mathematical relationship is then established by curve fitting. The curve fitting is carried out with the (CurveFit) program, written by T. S. Cox in the BASIC language and based on the equations listed in “Curve Fitting for Programmable Calculators” by William M. Kolb. The program fits the X and Y data to twenty-five different equations, computes the degree of similarity between the data and each fitted equation, and automatically selects and prints the equation with the best fit. The obtained mathematical relationship can be used to estimate the accuracy expected from the developed system for a given ground pixel size. The following sections illustrate the plotted curves and the fitted equations for the (X-Y) plane, the (Z) direction, and the space vector (R).
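The CurveFit BASIC program itself is not reproduced here, but the same second-order least-squares fit can be sketched with NumPy. This is only a rough modern equivalent under the assumption that NumPy is available: the data are the (X-Y)-plane column of Table 4.8, and the resulting coefficients may differ slightly from CurveFit's, since its "degree of similarity" measure is not documented here.

```python
# Second-order least-squares fit of RMS versus ground pixel size,
# analogous to the fit the CurveFit program selects for these data.
import numpy as np

# Ground pixel size (cm) and RMS of the (X-Y) plane (cm), Table 4.8.
gps = np.array([2.80, 2.32, 1.83, 1.40, 1.34, 1.16, 0.91, 0.70,
                0.67, 0.58, 0.47, 0.46, 0.39, 0.34, 0.31, 0.22])
rms_xy = np.array([1.55, 1.11, 0.93, 0.82, 0.67, 0.67, 0.53, 0.57,
                   0.45, 0.57, 0.35, 0.39, 0.33, 0.37, 0.29, 0.27])

# polyfit returns coefficients from highest to lowest order.
c2, c1, c0 = np.polyfit(gps, rms_xy, deg=2)

# Coefficient of determination R^2 as a goodness-of-fit measure.
pred = np.polyval([c2, c1, c0], gps)
r2 = 1.0 - np.sum((rms_xy - pred) ** 2) / np.sum((rms_xy - rms_xy.mean()) ** 2)
print(f"RMS = {c0:.4f} + {c1:.4f} gps + {c2:.4f} gps^2,  R^2 = {r2:.3f}")
```

Substituting the (Z)-direction or space-vector columns of Table 4.8 for `rms_xy` gives the corresponding fits.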


Table 4.8: The ground pixel size (Table 4.3) with the root mean squares errors (equation 4-1, in cm) of the (X-Y) plane, the (Z) direction, and the space vector (R) obtained from the experimental work (Tables 4.4 to 4.7)

Camera-object   Digitizing         Ground pixel             RMS (cm)
distance (m)    resolution (dpi)   size (cm)      X-Y Plane  Z-Direction  Vector (R)
    11.5              50               2.80          1.55        3.62        3.93
     9.5              50               2.32          1.11        2.61        2.84
     7.5              50               1.83          0.93        2.04        2.24
    11.5             100               1.40          0.82        1.92        2.09
     5.5              50               1.34          0.67        1.23        1.40
     9.5             100               1.16          0.67        1.13        1.32
     7.5             100               0.91          0.53        0.91        1.05
    11.5             200               0.70          0.57        0.84        1.00
     5.5             100               0.67          0.45        0.58        0.73
     9.5             200               0.58          0.57        0.83        0.92
    11.5             300               0.47          0.35        0.61        0.71
     7.5             200               0.46          0.39        0.71        0.90
     9.5             300               0.39          0.33        0.60        0.68
     5.5             200               0.34          0.37        0.50        0.63
     7.5             300               0.31          0.29        0.53        0.60
     5.5             300               0.22          0.27        0.33        0.43


- The (X-Y) plane:

Table 4.8 shows the ground pixel size together with the root mean squares error of the (X-Y) plane obtained from the experimental work, in centimeters. The results are presented graphically in Fig.4.9, from which it can be noticed that the root mean squares error of the (X-Y) plane increases with increasing ground pixel size. A second-order equation was found to give the best fit for the points: RMS (X-Y plane) = 0.2455 + 0.2793 (g.p.s.) + 0.0436 (g.p.s.)² The degree of fitting of the points with the equation, computed by the (CurveFit) program, is 96.15%.
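The fitted polynomial serves as a quick accuracy predictor. A minimal sketch, assuming the coefficients reported above (the function name `rms_xy_plane` is illustrative):

```python
# Predict the planimetric RMS error (cm) from the ground pixel size (cm)
# using the second-order equation fitted to the Table 4.8 data.
def rms_xy_plane(gps):
    return 0.2455 + 0.2793 * gps + 0.0436 * gps ** 2

# A ground pixel size of 1.0 cm predicts roughly 0.57 cm planimetric RMS.
print(round(rms_xy_plane(1.0), 2))  # 0.57
```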

[Fig.4.9 plot: RMS of the (X-Y) plane (cm) versus ground pixel size (cm), with the fitted second-order curve.]

Fig.4.9: The relationship between the ground pixel size and the root mean squares error of the (X-Y) plane, with the fitted curve


- The (Z) Direction:

Table 4.8 shows the ground pixel size together with the root mean squares error of the (Z) direction obtained from the experimental work, in centimeters. The results are presented graphically in Fig.4.10, from which it is evident that the root mean squares error of the (Z) direction increases with increasing ground pixel size. A second-order equation was found to give the best fit for the points: RMS (Z) = 0.3152 + 0.5149 (g.p.s.) + 0.1878 (g.p.s.)² The degree of fitting of the points with the equation is computed and found to be 96.90%.

[Fig.4.10 plot: RMS of the (Z) direction (cm) versus ground pixel size (cm), with the fitted second-order curve.]

Fig.4.10: The relationship between the ground pixel size and the root mean squares error of the (Z) direction, with the fitted curve


- The space vector (R):

Table 4.8 shows the ground pixel size together with the root mean squares error of the space vector (R) obtained from the experimental work, in centimeters. The results are presented graphically in Fig.4.11, from which it is clear that the root mean squares error of the space vector (R) increases with increasing ground pixel size. A second-order equation was found to give the best fit for the points: RMS (Vector (R)) = 0.4029 + 0.5726 (g.p.s.) + 0.1911 (g.p.s.)² The degree of fitting of the points with the equation is computed and found to be 97.12%.

[Fig.4.11 plot: RMS of the space vector (R) (cm) versus ground pixel size (cm), with the fitted second-order curve.]

Fig.4.11: The relationship between the ground pixel size and the root mean squares error of the space vector (R), with the fitted curve


Fig.4.12 summarizes the relationships between the ground pixel size and the root mean squares errors of the (X-Y) plane, the (Z) direction, and the space vector (R). From this figure, it is evident that the accuracy in the (Z) direction deteriorates faster with increasing ground pixel size than the accuracy in the (X-Y) plane.
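The faster deterioration of the depth accuracy can be quantified by evaluating the three fitted equations at a fine and a coarse ground pixel size, a sketch assuming the coefficients reported in the preceding sections:

```python
# Evaluate the three second-order fits at a fine and a coarse ground
# pixel size (cm) to compare how the errors grow.
def rms_xy(g): return 0.2455 + 0.2793 * g + 0.0436 * g ** 2
def rms_z(g):  return 0.3152 + 0.5149 * g + 0.1878 * g ** 2
def rms_r(g):  return 0.4029 + 0.5726 * g + 0.1911 * g ** 2

for g in (0.5, 2.8):
    print(f"gps={g:3.1f} cm: XY={rms_xy(g):.2f}  Z={rms_z(g):.2f}  R={rms_r(g):.2f}")
```

At a 2.8 cm ground pixel the predicted (Z) error (about 3.2 cm) is more than twice the predicted planimetric error (about 1.4 cm), consistent with the divergence visible in Fig.4.12.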

[Fig.4.12 plot: RMS (cm) of the (X-Y) plane, the (Z) direction, and the space vector (R) versus ground pixel size (cm).]

Fig.4.12: The relationship between the ground pixel size and the RMS of (X-Y) plane, (Z) direction, and the space vector (R)

Chapter (5): Engineering Applications

109

Chapter (5)

Digital Photogrammetric Applications

5.1 Introduction

Since the early days, photogrammetrists, as mapmakers, have attempted to apply their technique outside the field of topographic mapping. Applications other than topographic mapping have been encompassed by the term “non-topographic photogrammetry”; for relatively small objects, the term “close-range photogrammetry” has also been used. Both terms describe photogrammetric applications in areas other than topographic mapping. The development of non-topographic photogrammetry has followed the general photogrammetric developments in the topographic mapping area. Currently, the main areas of application of non-topographic photogrammetry are architecture, such as the mapping of the exteriors (facades) of historical buildings, and industrial engineering, such as surveying the mating faces of ships built in halves to determine the fit of the two halves before they are launched and brought together, or synoptically recording measurable deformations in engineering models [ASPRS, 1989]. The following sections illustrate the architectural applications carried out in this study.



5.2 Architectural Application

The developed system was used for mapping the facades of three structures inside the campus of Assiut University. The output file obtained from the BSC program for each case is further processed using AutoCAD to draw the facade. The details of photographing, digitizing, accuracy analysis, and drawing are given in the following sections.

5.2.1 First Case: The Facade of the Gate of Assiut University Campus:

The main steps carried out to map this façade using the developed digital photogrammetric technique are outlined in the following sections:

-Photographing: A stereo-pair of photographs was taken with a non-metric camera for one of the main gates of the Assiut University campus. The camera has the same technical specifications as the one used in the experimental work, and the optimum photographing conditions obtained earlier were observed. The object-camera distance is about 40 meters and the distance between the two camera stations is about 28 meters, i.e. the B/H ratio is about 0.7 (see Fig.5.1). -Measuring of ground control and check points: A Topcon (ITS-1) total station was used to measure the horizontal and vertical angles of twenty natural points. The space coordinates of these points were computed by the base line method. Eleven points were used as


control points; the remaining nine points were used as check points. Fig.5.2 illustrates the control and check points used in the field. -Digitizing the photos: The photos were digitized using the Acer scanner at its highest available optical resolution, 300 dpi; the estimated ground pixel size is 1.63 cm (see Sec.4.4). -Measuring of image coordinates: The photo measurement program is used to display the photos on the computer screen, and the required image coordinates are then measured. These points are the reference points (control and check points) and a group of points forming the main details of the facade. The obtained file is in the ‘-.icf’ format accepted by the BSC program. -Applying the bundle adjustment method: The BSC program is used to determine the interior and exterior orientation of the cameras and then to compute the ground coordinates of the measured points of the facade. The root mean squares error of the check points, calculated using equation 4-1, is about 1.5 cm (i.e. about 0.9 g.p.s.) in the plane of the facade and about 2.11 cm (i.e. about 1.3 g.p.s.) along the perpendicular axis. -Drawing the facade: The coordinates resulting from the BSC program form a ‘-.dxf’ file that is opened in a CAD environment (see Fig.5.3). The object points are displayed on the screen and connected together using the AutoCAD software. The connected points give the main details of the facade, and the drawing is then completed with suitable hatching, as shown in Fig.5.4.
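Expressing the check-point RMS as a multiple of the ground pixel size, as done throughout the case studies, is a one-line computation; a small sketch using the first case's values:

```python
# Express the check-point RMS errors as multiples of the ground pixel
# size (g.p.s.), as done in the text; values from the first case study.
gps = 1.63        # estimated ground pixel size, cm
rms_plane = 1.50  # RMS in the facade plane, cm
rms_depth = 2.11  # RMS along the perpendicular axis, cm

print(f"plane: {rms_plane / gps:.1f} g.p.s.")  # plane: 0.9 g.p.s.
print(f"depth: {rms_depth / gps:.1f} g.p.s.")  # depth: 1.3 g.p.s.
```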


Fig.5.1: Layout of the first study case


Fig.5.2: The right photo of the main gate of Assuit University showing control and check points



Fig.5.3: The ‘-.dxf ’file containing all the points on the Gate

Fig.5.4: The Façade of the Assuit University Gate after AutoCAD


5.2.2 Second Case: The Facade of the Old Building of Assiut University:

The main steps carried out to map the facade of this building using the developed digital photogrammetric technique are outlined in the following sections: -Photographing: A stereo-pair of photographs was taken with a non-metric camera for the old Assiut University building. The camera has the same technical specifications as the one used in the experimental work, and the optimum photographing conditions obtained earlier were observed. The object-camera distance is about 20.0 meters and the distance between the two camera stations is about 14.0 meters, i.e. the B/H ratio is about 0.7 (see Fig.5.5). -Measuring of ground control and check points: The same Topcon (ITS-1) total station was used to measure the horizontal and vertical angles of twenty-six natural points. The space coordinates of these points were computed by the base line method. Eleven points were used as control points; the remaining fifteen points were used as check points. The photo shown in Fig.5.6 illustrates the control and check points used in the field. -Digitizing the photos: The photos were digitized using the Acer scanner at its highest available optical resolution, 300 dpi; the estimated ground pixel size is about 0.82 cm (see Sec.4.4). -Measuring of image coordinates: The photo measurement program is used to display the photos on the computer screen, and the required image coordinates are then


measured. These points are the reference points (control and check points) and a group of points forming the main details of the facade. The obtained file is in the ‘-.icf’ format accepted by the BSC program. -Applying the bundle adjustment method: The BSC program is used to determine the interior and exterior orientation of the cameras and then to compute the ground coordinates of the measured points of the facade. The accuracy of the check points is found to be about 0.80 cm (i.e. about 1.0 g.p.s.) in the plane of the facade and about 1.18 cm (i.e. about 1.44 g.p.s.) along the perpendicular axis. -Drawing the facade: The coordinates resulting from the BSC program form a ‘-.dxf’ file that is opened in a CAD environment (see Fig.5.7). The object points are displayed on the screen and connected together using the AutoCAD software. The connected points give the main details of the facade, and the drawing is then completed with suitable hatching, as shown in Fig.5.8.

Fig.5.5: Layout of the second case of study


Fig.5.6: The right photo of the old building of Assuit University showing control and check points


Fig.5.7: The ‘-.dxf ’file containing all the points of the facade


Fig.5.8: The Façade of the old building of Assuit University after AutoCAD


5.2.3 Third Case: The Facade of the Faculty of Medicine – Assiut University:

This application represents one of the cases where the position of the object to be mapped and the surrounding details force the photographer to deviate from the optimum photographing conditions. The main steps carried out to map this facade using the developed digital photogrammetric technique are outlined in the following sections:

-Photographing: A stereo-pair of photographs was taken inside the campus of Assiut University with a non-metric camera for the facade of the faculty of medicine. The natural constraints in the field forced the selection of the camera positions shown in Fig.5.9. As a result of these constraints, a wide-angle lens (focal length 28 mm) had to be used in order to photograph the facade. The left object-camera distance is about 40.7 meters and the right object-camera distance is about 45.8 meters, i.e. the mean object distance is about 44.0 meters, and the distance between the two camera stations is about 152.0 meters. The B/H ratio is therefore about 3.45. -Measuring of ground control and check points: The Topcon (ITS-1) total station was used to measure the horizontal and vertical angles of twenty-two natural points. The space coordinates of these points were computed by the base line method. Eleven points were used as control points; the remaining eleven points were used as check points. The photo shown in Fig.5.10 illustrates the control and check points used in the field.
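The base-to-height ratios quoted for the three case studies come from the base and mean object distance of each configuration; a small sketch using the figures given in the text (the case labels are illustrative):

```python
# Base-to-height (B/H) ratios for the three case studies, using the
# base and mean object distance (m) quoted in the text.
cases = {
    "gate":     (28.0, 40.0),
    "building": (14.0, 20.0),
    "medicine": (152.0, 44.0),
}
for name, (base, dist) in cases.items():
    print(f"{name}: B/H = {base / dist:.2f}")
# gate: B/H = 0.70
# building: B/H = 0.70
# medicine: B/H = 3.45
```

The third case's ratio of about 3.45 lies far outside the optimum range of 0.6 to 0.7 established in Chapter 4, which is the deviation this application illustrates.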


-Digitizing the photos: The photos were digitized using the Acer scanner, with the scanning resolution selected to obtain a ground pixel size of about 3.32 cm (see Sec.4.4). -Measuring of image coordinates: The photo measurement program is used to display the photos on the computer screen, and the required image coordinates are then measured. These points are the reference points (control and check points) and a group of points forming the main details of the facade. The obtained file is in the ‘-.icf’ format accepted by the BSC program. -Applying the bundle adjustment method: The BSC program is used to determine the interior and exterior orientation of the cameras and then to compute the ground coordinates of the measured points of the facade. The root mean squares error of the check points is found to be about 3.00 cm (i.e. about 0.9 g.p.s.) in the plane of the facade and about 4.81 cm (i.e. about 1.4 g.p.s.) along the perpendicular axis. -Drawing the facade: The coordinates resulting from the BSC program form a ‘-.dxf’ file that is opened in a CAD environment (see Fig.5.11). The object points are displayed on the screen and connected together using the AutoCAD software. The connected points give the main details of the facade, and the drawing is then completed with suitable hatching, as shown in Fig.5.12.


Fig.5.9: Layout of the third case of study


Fig.5.10: The right photo of the faculty of medicine showing control and check points


Fig.5.11: The ‘-.dxf ’file containing all the points of the façade of the faculty of medicine in Assuit University

Fig.5.12: The Façade of the faculty of medicine Assuit University


Chapter (6)

Conclusions and Recommendations

6.1 Conclusions:

The objective of this investigation is to study some of the factors affecting the accuracy of ground coordinates computed by means of digital photogrammetry for architectural applications. According to the analysis of the results obtained from the experimental work using the non-metric camera, the following conclusions can be drawn:

1. The accuracy of the space coordinates increases with the number of control points up to a limit; in this work this limit is 11 points for one stereo pair.
2. The accuracy of the space coordinates increases with the (B/H) ratio; the obtained optimum ratio is about 0.6 to 0.7.
3. The accuracy of the space coordinates increases significantly with the digitizing resolution of the photos up to 200 dpi, after which the rate of improvement becomes lower.
4. The accuracy of the space coordinates increases as the ground pixel size decreases; a relationship between the accuracy and the ground pixel size was obtained.
5. This research work was extended to apply the developed digital photogrammetric technique to mapping the facades of three buildings inside the campus of Assiut University. It has been found that the obtained


accuracy in such applications was about 18 to 20 μm at photo scale in the plane of the facade and about 26 to 28 μm in the perpendicular direction, which corresponds to about 0.9 to 1.0 of the pixel size in the plane of the facade and about 1.3 to 1.4 of the pixel size in the perpendicular direction.

6.2 Recommendations for future work:

On the basis of the experience gained during this research work, the following recommendations for future studies can be given:
1. Future work should study experimentally the improvement in the accuracy of the computed ground coordinates using higher digitizing resolutions, which may be required for some specific applications.
2. Other factors affecting the accuracy of the ground coordinates should be studied when using digital cameras.
3. Stereoscopic observation is expected to improve the accuracy obtained in photogrammetric applications; accordingly, it is recommended to analyze the effects of these factors under stereoscopic observation.

References

127

References

Abbas, A.M., 1996, ‘Accuracy Analysis for New Close-Range Photogrammetric Systems’, M.Sc. Thesis, Al-Minia University, Egypt.

Abdel-Aziz, Y.I., 1982, ‘The Achieved Accuracy from a Close-Range Photogrammetric System’, Proceedings of ACSM-ASP Fall Convention, Hollywood, Florida.

American Society of Photogrammetry (ASP), 1980, ‘Manual of Photogrammetry’, Fourth Edition.

American Society of Photogrammetry and Remote Sensing (ASPRS), 1989, ‘Manual of Non-Topographic Photogrammetry’, Second Edition.

Astori, B., Bezoari, G., and Guzzetti, F., 1992, ‘Analogue and Digital Methods in Architectural Photogrammetry’, International Archives of Photogrammetry and Remote Sensing, Commission V, pp. 761-766.

Atkinson, K.B., 1976, ‘A Review of Close-Range Engineering Photogrammetry’, Photogrammetric Engineering and Remote Sensing, Vol. 42, No. 1.

Babaei, R.M., 1981, ‘Practical Applications and Development of a Multi-Station/Multi-Stereo System’, Ph.D. Thesis, UMIST.

Beyer, H.A., 1992, ‘Advances in Characterisation and Calibration of Digital Imagery Systems’, International Archives of Photogrammetry and Remote Sensing, Commission V, pp. 545-555.

Bracken, P.A., Dalton, J.T., Quann, J.J., and Billingsley, J.B., 1978, ‘AOIPS - An Interactive Image Processing System’, Proceedings of National Computer Conference, Vol. 47, AFIPS Press, Montvale, New Jersey.

Brown, D.C., 1971, ‘Close-Range Camera Calibration’, Photogrammetric Engineering, 37(8): 855-866.


Capanni, G., and Flamigni, F., 1996, ‘SISCAM Softcopy Photogrammetric Workstation’, International Archives of Photogrammetry and Remote Sensing, Vienna, Austria, Volume XXXI, Part B2, pp. 41-45.

Crombie, M.A., 1976, ‘Stereo Analysis of a Specific Digital Model Sampled from Aerial Imagery’, ETL Report-0072.

Daniel G. Brown and Alan F. Arbogast, 1999, ‘Digital Photogrammetric Change Analysis as Applied to Active Coastal Dunes in Michigan’, Photogrammetric Engineering and Remote Sensing, Volume 65, No. 4, pp. 467-476.

Driscoll, R.S., Reppert, J.N., and Heller, R.C., 1974, ‘Microdensitometry to Identify Plant Communities and Components on Color Infrared Aerial Photos’, Journal of Range Management, 27(1).

Ebrahim, M.A-B., 1992, ‘Using Close Range Photogrammetry for Some Engineering Applications’, M.Sc. Thesis, Assiut University, Egypt.

Ebrahim, M.A-B., 1998, ‘Application and Evaluation of Digital Image Techniques in Close Range Photogrammetry’, Ph.D. Thesis, Institute of Geodesy, University of Innsbruck, Innsbruck, Austria.

Ebrahim, M.A-B., 1999, ‘Effect of the Image Resolution on the Accuracy of 3D-Measurement in Digital Close Range Photogrammetry’, Al-Minia Conference, Egypt.

El-Beik, A.M.A., and Mahani, R.B., 1984, ‘The Quadrustational Close-Range Photogrammetric System’, Photogrammetric Engineering and Remote Sensing, 50(3).

El-Hakim, S.F., 1986, ‘The Detection of Gross and Systematic Errors in the Combined Adjustment of Terrestrial and Photogrammetric Data’, Photogrammetric Engineering and Remote Sensing, 52(1): 59-66.

Faig, W., Shih, T.Y., and Liu, X.S., 1990, ‘A Non-Metric Camera System and Its Calibration’, ACSM-ASPRS Annual Convention, Vol. 5.


Faig, W., and El-Hakim, S.F., 1982, ‘The Use of Distances as Object Space Control in Close-Range Photogrammetry’, International Archives of Photogrammetry, 24(5/1): 144-148.

Faig, W., 1976, ‘Photogrammetric Potentials of Non-Metric Cameras’, Report of ISP Working Group V/2, Photogrammetric Engineering and Remote Sensing, 42(1), 47-50.

Farag, A.F., 1999, ‘Implementation of Digital Photogrammetry on Optical Satellite’, Article Review, Civil Engineering Dept., Assiut University.

Fraser, Clive S., 1997, ‘Digital Camera Self-Calibration’, Journal of Photogrammetry and Remote Sensing, Vol. 52(4), pp. 149-159.

Fraser, C.S., 1982, ‘On the Use of Non-Metric Cameras in Analytical Close-Range Photogrammetry’, The Canadian Surveyor, 36(3): 259-279.

Gambino, L.A., and Crombie, M.A., 1979, ‘Manipulation and Display of Digital Topographic Data’, Second Symposium on Automation Technology in Engineering Drawing, Naval Postgraduate School, Monterey, Ca.

Georgopoulos, A., 1996, ‘Towards an Operational Digital Video Photogrammetric System for 3-D Measurements’, International Archives of Photogrammetry and Remote Sensing, Volume XXXI, Vienna, pp. 111-116.

Gruen, Armin, 1996, ‘Digital Photogrammetric Stations Revisited’, International Archives of Photogrammetry and Remote Sensing, Volume XXIX, Part B2, Commission 2, pp. 127-134.

Hanke, K., and Ebrahim, M.A-B., 1997, ‘A Low Cost 3D-Measurement Tool for Architectural and Archaeological Applications’, CIPA International Symposium, Vol. XXXII, Part 5C1B, Göteborg, Sweden.

Hans-Gerd Maas, 1999, ‘Image Sequence Based Automatic Multi-Camera System Calibration Techniques’, Journal of Photogrammetry and Remote Sensing, Vol. 54(5-6), pp. 352-359.


Helava, U.V., Hornbuckle, J.A., and Shahan, A.J., 1972, ‘Digital Processing and Analysis of Image Data’, Bendix Technical Journal.

Henri Veldhuis, 1997, ‘The 3D Reconstruction of Straight and Curved Pipes Using Digital Line Photogrammetry’, Journal of Photogrammetry and Remote Sensing, Vol. 53(1), pp. 6-16.

Hottier, P.H., 1976, ‘Accuracy of Close-Range Analytical Restitutions: Practical Experiments and Prediction’, Photogrammetric Engineering and Remote Sensing, 42(3).

International Society for Photogrammetry and Remote Sensing (ISPRS), 1992.

James, T., 1999, ‘The Cost of a Digital Workstation’, Journal of Photogrammetry and Remote Sensing.

Jepsen, P.L., 1976, ‘The Software/Hardware Interface for Interactive Image Processing at the Image Processing Laboratory of the Jet Propulsion Laboratory’, Proceedings of the Digital Equipment Computer Users’ Society.

Joachim Höhle, 1997, ‘Computer-Assisted Teaching and Learning in Photogrammetry’, Journal of Photogrammetry and Remote Sensing, Vol. 52(6), pp. 266-276.

Kang, J.M., Oh, W.J., and Bae, Y.S., 1996, ‘Large Scale Geographic Information Acquisition by 35mm Camera’, International Archives of Photogrammetry and Remote Sensing, Vienna, Austria, Volume XXXI, Part B4, pp. 431-436.

Karara, H.M., and Abdel-Aziz, Y.I., 1984, ‘Accuracy Aspects of Non-Metric Imageries’, Photogrammetric Engineering, Vol. 50, No. 9.

Karara, H.M., 1985, ‘Close-Range Photogrammetry: Where Are We and Where Are We Heading?’, Photogrammetric Engineering and Remote Sensing, 51(5).


Konecny, G., 1985, ‘The International Society for Photogrammetry and Remote Sensing - 75 Years Old’, Photogrammetric Engineering and Remote Sensing, 51(7), 919-933.

Lee, C.K., and Faig, W., 1999, ‘Dynamic Monitoring with Video Systems’, Photogrammetric Engineering and Remote Sensing, Vol. 65, No. 5, pp. 589-595.

Lemmer, J.F., 1982, ‘The Potential Synergy of Photogrammetry and Computer Vision’, Proceedings of the ACSM-ASP 48th Annual Meeting, Denver, Colorado.

Light, D.L., 1999, ‘C-Factor for Softcopy Photogrammetry’, Photogrammetric Engineering and Remote Sensing, Volume 65, No. 6, pp. 667-669.

Makarovic, B., and Tempfli, K., 1979, ‘Digitizing Images for Automatic Processing in Photogrammetry’, ITC Journal.

Mahajan, S.K., and Singh, V., 1972, ‘Comparison of Analytical Relative-Orientation Methods’, Journal of the Surveying and Mapping Division of ASCE, July.

Mahani, R.B., 1981, ‘Close-Range Photogrammetry: Practical Applications and Development of a Multi-Station/Multi-Stereo System’, Ph.D. Thesis, University of Manchester.

Mannos, J.L., 1980, ‘The Next Generation of Softcopy Image Displays’, Proceedings of the ACSM-ASP 46th Annual Meeting, St. Louis, Mo.

U-Lead Systems, 1992, ‘iPhoto Deluxe User Manual’.

Marks, W.G., and Asce, A.M., 1976, ‘Image Error and Photogrammetric Required’, Journal of the Surveying and Mapping Division of ASCE.

Martin J. Smith and Douglas G. Smith, 1996, ‘Operational Experiences of Digital Photogrammetric Systems’, International Archives of Photogrammetry and Remote Sensing, Volume XXIX, Part B2, Commission 2, pp. 357-362.


Mathias Lemmens, 1999, GIM International, the Worldwide Magazine for Geomatics, January 1999.

Mikhail, E.M., and Mulawa, D.C., 1985, ‘Geometric Form Fitting in Industrial Metrology Using Computer-Assisted Theodolites’.

Mikhail, G.M., 1983, ‘Photogrammetric Target Location to Sub-Pixel Accuracy in Digital Images’.

Mohamed, A.A., 1985, ‘Quantitative Imagery Analysis in Close-Range Photogrammetry’, Ph.D. Thesis, University of Manchester, December.

Mohamed, A.A., 1997, ‘Block Adjustment’, Article Review, Civil Engineering Dept., Assiut University.

Molv, V., Oleynik, S., Gajda, V., and Zotov, G., 1996, ‘Digital Photogrammetric Station “DELTA”’, International Archives of Photogrammetry and Remote Sensing, Vol. XXXI, Part B2, Vienna.

Newby, P.R.T., 1996, ‘Digital Images in the Map Revision Process’, Journal of Photogrammetry and Remote Sensing, Vol. 51(4), pp. 188-195.

Norvelle, F.R., 1981, ‘Interactive Digital Correlation Techniques for Automatic Compilation of Elevation Data’, Proceedings of the ACSM-ASP 47th Annual Meeting, Washington, D.C.

Novak, K., 1991, ‘Bundle Adjustment with Self-Calibration (BSC) User Manual’, Ohio State University.

Opitz, B.K., Becker, J., and Cook, H., 1982, ‘The Defense Mapping Agency’s Pilot Digital Operations’, Proceedings of the ACSM-ASP 48th Annual Meeting, Denver, Colorado.

Pivencka, Frantisek; Vladimir Cervenka, Karel Charvat, and Ales Limpouch, 1996, ‘Cost-Effective Digital Photogrammetry’, International Archives of Photogrammetry and Remote Sensing, Volume XXIX, Part B2, Commission 2, pp. 300-305.


Schrock, B.L., 1980, ‘Applications of Digital Displays in Photo Interpretation and Digital Mapping’, Proceedings of the ACSM-ASP 46th Annual Meeting, St. Louis, Mo.

Schwartz, D.S., 1982, ‘Close-Range Photogrammetry for Aircraft Quality Control’, Proceedings of the ACSM-ASP 48th Annual Meeting, Denver, Colorado.

Scott Mason, Heinz Ruther, and Julian Smit, 1997, ‘Investigation of the Kodak DCS460 Digital Camera for Small-Area Mapping’, Journal of Photogrammetry and Remote Sensing, Vol. 52(5), pp. 202-214.

Stojic, M., Chandler, J., Ashmore, P., and Luce, J., 1998, ‘Assessment of Sediment Transport Rates by Automated Digital Photogrammetry’, Photogrammetric Engineering and Remote Sensing, Vol. 64, No. 5, pp. 387-395.

Thurgood, J.D., and Mikhail, E.M., 1982, ‘Photogrammetric Aspects of Digital Images’, Proceedings of the ACSM-ASP 48th Annual Meeting, Denver, Colorado.

Torlegard, K., 1976, ‘State of the Art of Close-Range Photogrammetry’, Photogrammetric Engineering and Remote Sensing, 42(1).

Wolf, P.R., 1974, ‘Elements of Photogrammetry’, International Student Edition, McGraw-Hill Kogakusha, Ltd.

Wolf, P.R., and Storey, J.C., 1982, ‘Automatic DTM Generation from Digitized Image Densities’, Proceedings of the ACSM-ASP 48th Annual Meeting, Denver, Colorado.

Appendix

134

Appendix ‘A’: Sample of the Used Computer Files

Example (1): (At H = 11.5 m)

---.ICF (Image Coordinates File)
p1r300 1 -12.93368 0.52832
p1r300 11 -14.478 -7.29488
p1r300 19 -13.72616 7.59968
p1r300 28 8.56488 -0.34544
p1r300 34 12.52728 -9.57072
p1r300 49 13.38072 7.88416
p1r300 56 -1.81864 8.55472
p1r300 62 -0.84328 1.05664
p1r300 73 -2.97688 -8.45312
p1r300 68 -2.36728 -0.87376
p1r300 46 7.2644 9.81456
p1r300 9 -11.34872 -8.41248
p1r300 3 -9.76376 0.46736
p1r300 5 -7.874 -3.37312
p1r300 7 -8.4836 -7.19328
p1r300 13 -14.59992 -4.7752
p1r300 15 -15.06728 -1.19888
p1r300 17 -14.45768 4.14528
p1r300 23 -7.69112 7.86384
p1r300 25 -7.34568 3.88112
p1r300 27 6.73608 -0.12192
p1r300 29 10.8204 -0.28448
p1r300 31 12.71016 -4.63296
p1r300 33 12.60856 -8.87984
p1r300 35 10.25144 -10.93216
p1r300 39 4.318 -6.03504
p1r300 41 4.9276 -2.27584
p1r300 43 4.86664 4.02336
p1r300 45 5.29336 7.68096
p1r300 47 9.0932 10.18032
p1r300 51 13.31976 3.6576
p1r300 53 3.38328 8.85952

Appendix

p1r300 p1r300 p1r300 p1r300 p1r300 p1r300 p1r300 p1r300 p1r300 p1r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300

135

55 57 59 61 63 65 67 69 71 75 1 11 19 28 34 49 56 62 73 68 46 9 3 5 7 13 15 17 23 25 27 29 31 33 35 39 41 43 45

-0.3556 8.8392 -3.56616 8.4328 -5.76072 1.03632 -2.63144 1.07696 1.02616 0.97536 3.03784 -0.99568 -0.68072 -1.016 -4.23672 -0.87376 -6.18744 -8.14832 0.88392 -8.71728 -10.37336 0.48768 -12.83208 -8.71728 -12.36472 8.79856 12.16152 -0.24384 14.33576 -8.06704 14.84376 6.84784 0.9652 8.636 3.11912 1.016 0.64008 -8.55472 1.02616 -0.85344 9.92632 9.02208 -8.82904 -9.59104 -6.18744 0.42672 -4.88696 -3.69824 -4.43992 -7.86384 -12.73048 -5.7912 -12.3444 -1.58496 -12.30376 4.71424 -4.0132 8.39216 -4.70408 4.14528 10.04824 -0.04064 13.64488 -0.14224 15.20952 -3.8608 14.94536 -7.45744 13.50264 -9.42848 8.44296 -5.56768 7.93496 -2.032 8.46328 3.77952 8.03656 7.2136

Appendix

p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300 p4r300

1 11 19 28 34 49 56 62 73 68 46

136

47 51 53 55 57 59 61 63 65 67 69 71 75

12.01928 14.45768 6.57352 3.05816 -0.82296 -2.6924 1.31064 4.90728 6.20776 2.71272 -0.92456 -2.85496 4.48056

296.0439 275.0665 276.4202 534.9456 573.0514 575.909 416.2217 435.8166 412.8642 416.2817 516.1042

9.12368 3.2512 8.4328 8.75792 8.65632 1.07696 1.07696 0.95504 -0.87376 -1.016 -0.9144 -8.61568 -8.41248

---.GCF (Ground Coordinates File) 119.3149 106.0786 30.321 103.5524 199.251 91.4082 116.9461 113.9679 28.9542 103.2638 196.5403 94.5958 208.037 80.3393 127.6759 101.6764 27.0903 99.8049 106.9274 93.4862 217.6062 89.742

---.CAM (Camera Parameters) pentax 1/1/90 0.000000 0.000000 1 0.033438 0.055171 50.002146 -0.000015 0.000000 -0.000043 -0.000016 0.000721 0.001911 ppentax 1/1/90 0.000000 0.000000 1 0.027202 0.058094 50.001105 0.000047 -0.000000 0.000212 -0.000045 0.000078 0.000781

p1r300 p4r300

617.5 145 232.5 145

---.APX (Approximation File) 550 0 15 0 pentax 550 0 -15 0 ppentax
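The .ICF records above use a simple whitespace-separated layout: photo name, point number, then the x and y image coordinates in millimetres. A minimal Python sketch of reading that layout (the thesis programs themselves are in Visual Basic; the function name here is ours):

```python
def read_icf(lines):
    """Parse ICF records into a dict keyed by (photo, point) -> (x, y)."""
    records = {}
    for line in lines:
        parts = line.split()
        if len(parts) != 4:
            continue  # skip blank or malformed lines
        photo, point = parts[0], int(parts[1])
        x, y = float(parts[2]), float(parts[3])
        records[(photo, point)] = (x, y)
    return records

# first two records of the sample listing
sample = [
    "p1r300 1 -12.93368 0.52832",
    "p1r300 11 -14.478 -7.29488",
]
coords = read_icf(sample)
```

Keying by (photo, point) makes it easy to pair the two photo measurements of the same ground point for the intersection step.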


The Results:
-----------------
---.OPF (Orientation Parameter File)
p1r300 595.532 141.794 593.995 -2.734006 16.971860 1.597937 0.000000 0.000000 0.007931 0.091566 49.994677 -0.000009 0.000000 0.000003 -0.000029 0.001488 0.002656
p4r300 251.562 142.216 594.809 -2.979817 -17.029899 1.092131 0.000000 0.000000 0.057901 0.013038 50.015337 0.000046 -0.000000 0.000205 -0.000060 0.000382 0.000178

---.DXF (opened in AutoCAD; each line below lists one entity's group code/value pairs run together)
0 SECTION 2 ENTITIES
0 POINT 8 0 10 296.044 20 119.315 30 106.079
0 TEXT 8 0 10 296.544 20 119.815 30 106.579 40 0.500 1 1
0 POINT 8 0 10 275.067 20 30.321 30 103.552
0 TEXT 8 0 10 275.567 20 30.821 30 104.052 40 0.500 1 11
0 POINT 8 0 10 276.420 20 199.251 30 91.408
0 TEXT 8 0 10 276.920 20 199.751 30 91.908 40 0.500 1 19
0 POINT 8 0 10 534.946 20 116.946 30 113.968
0 TEXT 8 0 10 535.446 20 117.446 30 114.468 40 0.500 1 28
0 POINT 8 0 10 573.051 20 28.954 30 103.264
0 TEXT 8 0 10 573.551 20 29.454 30 103.764 40 0.500 1 34
0 POINT 8 0 10 575.909 20 196.540 30 94.596
0 TEXT 8 0 10 576.409 20 197.040 30 95.096 40 0.500 1 49
0 POINT 8 0 10 416.222 20 208.037 30 80.339
0 TEXT 8 0 10 416.722 20 208.537 30 80.839 40 0.500 1 56
0 POINT 8 0 10 435.817 20 127.676 30 101.676
0 TEXT 8 0 10 436.317 20 128.176 30 102.176 40 0.500 1 62
0 POINT 8 0 10 412.864 20 27.090 30 99.805
0 TEXT 8 0 10 413.364 20 27.590 30 100.305 40 0.500 1 73
0 POINT 8 0 10 416.282 20 106.927 30 93.486
0 TEXT 8 0 10 416.782 20 107.427 30 93.986 40 0.500 1 68
0 POINT 8 0 10 516.104 20 217.606 30 89.742
0 TEXT 8 0 10 516.604 20 218.106 30 90.242 40 0.500 1 46
0 POINT 8 0 10 595.532 20 141.794 30 593.995
0 TEXT 8 0 10 596.032 20 142.294 30 594.495 40 0.500 1 p1r300
0 POINT 8 0 10 251.562 20 142.216 30 594.809
0 TEXT 8 0 10 252.062 20 142.716 30 595.309 40 0.500 1 p4r300
0 ENDSEC
0 EOF
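The DXF output above pairs every computed point with a POINT entity at (X, Y, Z) and a TEXT label offset by +0.5 in each axis with height 0.500. A sketch of that entity pattern in Python (the thesis writes this from Visual Basic; the function name and the rounding to 3 decimals are ours):

```python
def dxf_point_with_label(x, y, z, label, offset=0.5, height=0.5):
    """Return the DXF group-code/value stream for one labelled point."""
    pairs = [
        (0, "POINT"), (8, 0),
        (10, round(x, 3)), (20, round(y, 3)), (30, round(z, 3)),
        (0, "TEXT"), (8, 0),
        (10, round(x + offset, 3)), (20, round(y + offset, 3)), (30, round(z + offset, 3)),
        (40, height), (1, label),
    ]
    # DXF is strictly one group code per line followed by its value
    return "\n".join(f"{code}\n{value}" for code, value in pairs)

entity = dxf_point_with_label(296.044, 119.315, 106.079, 1)
```

Concatenating these streams between a `0 SECTION / 2 ENTITIES` header and `0 ENDSEC / 0 EOF` reproduces the structure of the file AutoCAD opened above.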


Calculation of the Root Mean Squares (RMS):
-----------------------------------------------------------
The RMS in (cm)

Point No.  (X) Direction  (Y) Direction  (X-Y) Plane  (Z) Direction  Vector (R)
2           0.1895        -0.1009        0.2146       -0.3498        0.4104
3          -0.2237        -0.1018        0.2457       -0.466         0.5268
4           0.0809        -0.2808        0.2922       -0.7505        0.8053
5           0.0309        -0.2535        0.2553       -0.3428        0.4274
6          -0.401         -0.3936        0.5618       -0.243         0.6121
7           0.0518         0.2443        0.2497        0.4231        0.4913
9          -0.1996         0.0445        0.2045       -0.2279        0.3062
10         -0.3151        -0.0795        0.3249       -0.2675        0.4209
12         -0.1197        -0.2668        0.2924        0.0672        0.3
13         -0.0633         0.0324        0.0711        0.2228        0.2338
14         -0.4135        -0.1588        0.4429       -0.2021        0.4868
15         -0.265         -0.2416        0.3586        0.0473        0.3617
16          0.3194        -0.3365        0.4639        0.2109        0.5096
17         -0.1363         0.0224        0.1381        0.4662        0.4862
18          0.3621        -0.1519        0.3926        0.7852        0.8779
20          0.2235         0.1817        0.288         0.5215        0.5957
21         -0.307          0.0952        0.3214        0.4105        0.5213
22          0.1232        -0.0737        0.1435        1.0056        1.0157
23          0.1866         0.0849        0.205        -0.3145        0.3754
24          0.3349         0.0215        0.3355        0.0729        0.3434
25          0.1108        -0.065         0.1284       -0.0488        0.1374
26         -0.0509        -0.0379        0.0634       -0.2371        0.2454
27         -0.2029         0.0935        0.2234       -0.1175        0.2524
29         -0.0602        -0.2165        0.2247       -0.3331        0.4018
30          0.1842        -0.0866        0.2035        0.195         0.2818
31          0.2553        -0.2185        0.336        -0.1413        0.3645
32          0.5717        -0.2904        0.6412        0.722         0.9656
33          0.1893        -0.007         0.1894        0.4001        0.4426
35          0.2823        -0.6107        0.6727       -0.2943        0.7343
36         -0.0551         0.2024        0.2097        0.1079        0.2358
37         -0.5345         0.2074        0.5733       -1.4617        1.5701
38         -0.2331        -0.2001        0.3072       -0.4455        0.5411
39         -0.019         -0.1335        0.1348       -0.4176        0.4388
40         -0.2592        -0.0834        0.2722       -0.416         0.4971
41         -0.2813         0.0357        0.2835       -0.7906        0.8399
42         -0.2179         0.0955        0.2379       -0.2278        0.3293
43         -0.017         -0.1173        0.1185       -0.8198        0.8283
44         -0.1752         0.1023        0.2028       -0.8899        0.9127
45         -0.2962         0.1735        0.3432        0.2515        0.4255
47         -0.239          0.3312        0.4084        0.5054        0.6498
48          0.0862         0.1135        0.1425        1.4957        1.5024
50          0.626         -0.1186        0.6371       -0.1489        0.6543
51          0.1528        -0.0665        0.1666       -0.4827        0.5106
52          0.086          0.3499        0.3603        0.0065        0.3603
53         -0.4645         0.3652        0.5908        0.3079        0.6662
54          0.1082         0.0644        0.1259       -0.2268        0.2594
55         -0.3679         0.3408        0.5014        0.2326        0.5528
57          0.3017        -0.377         0.4828        0.6325        0.7957
58          0.2194        -0.1246        0.2523        0.7894        0.8287
59          0.099         -0.2258        0.2465       -0.2661        0.3627
60         -0.2011         0.0953        0.2225       -0.0847        0.2381
61          0.0737        -0.2306        0.242        -1.5589        1.5775
63          0.1028        -0.0895        0.1363       -0.9431        0.9528
64         -0.1283         0.1702        0.2131       -1.2498        1.2678
65         -0.1493        -0.2316        0.2755       -0.7351        0.785
66         -0.1662        -0.0468        0.1726       -0.7046        0.7254
67         -0.333          0.0015        0.333        -0.6108        0.6956
69         -0.0593        -0.1535        0.1645       -0.8286        0.8447
70         -0.1005        -0.2542        0.2733       -0.2392        0.3632
71          0.0304         0.0262        0.0401        0.2889        0.2916
72          0.1873        -0.1105        0.2174        0.1573        0.2683
74         -0.2713        -0.4026        0.4854       -0.5472        0.7315
75         -0.0132        -0.2977        0.2979       -0.9224        0.9693
76         -0.2265        -0.2094        0.3084       -0.4716        0.5635

Number of points checked = 56
RMS(X) = 0.23 cm; max error in X = 0.63 cm at point 50
RMS(Y) = 0.27 cm; max error in Y = -0.61 cm at point 35
RMS(X-Y) = 0.35 cm; max error in the X-Y plane = 0.67 cm at point 35
RMS(Z) = 0.61 cm; max error in Z = -1.5589 cm at point 61
RMS(R) = 0.71 cm; max space error vector (R) = 1.58 cm at point 61
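The figures above follow directly from the per-point residuals: for residuals (dx, dy, dz) in cm, the planimetric error is sqrt(dx² + dy²), the space vector is r = sqrt(dx² + dy² + dz²), and each RMS is the root mean square of one column. A small Python sketch, using the first two table rows as sample data:

```python
import math

def rms(values):
    """Root mean square of a sequence of residuals."""
    return math.sqrt(sum(v * v for v in values) / len(values))

# residuals (dx, dy, dz) in cm for points 2 and 3 of the table
residuals = [(0.1895, -0.1009, -0.3498), (-0.2237, -0.1018, -0.466)]
dx, dy, dz = zip(*residuals)
plane = [math.hypot(x, y) for x, y in zip(dx, dy)]
vector = [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(dx, dy, dz)]
```

For point 2 this reproduces the tabulated plane error (≈ 0.2147, the thesis rounds to 0.2146) and space vector (≈ 0.4104).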


Appendix ‘B’
Computer Programs

List of the Photo Measurements Program Code:
--------------------------------------------------------------
Dim MySite As Integer
Dim MySite2 As Integer
Dim p1, p2, flagg, serial As String
Dim RH, RP1, RP2, S1, S2, fxl, fxr, fyl, fyr, Interpretation, Factorx, FactorY As Single
Dim X1, Y1
Sub DrawDotsAgain()
Picture2.Cls
Picture2.DrawWidth = 1
For i = 0 To MSFlexGrid1.Rows - 1
MSFlexGrid1.Row = i
MSFlexGrid1.Col = 0
If MSFlexGrid1.Text = "" Then
MySite = 0
Exit Sub
End If
MySite = MSFlexGrid1.Text
MSFlexGrid1.Col = 1
X = MSFlexGrid1.Text
MSFlexGrid1.Col = 2
Y = MSFlexGrid1.Text
Picture2.PSet (X, Y)
Picture2.Circle (X, Y), 2 * 15
Picture2.CurrentX = X '+ 1 * 15
Picture2.CurrentY = Y '+ 1 * 15
Picture2.Print MySite
Next
End Sub
Sub DrawDotsagain_2()
Picture6.Cls
Picture6.DrawWidth = 1
For i = 0 To MSFlexGrid2.Rows - 1
MSFlexGrid2.Row = i


MSFlexGrid2.Col = 0
If MSFlexGrid2.Text = "" Then
MySite2 = 0
Exit Sub
End If
MySite2 = MSFlexGrid2.Text
MSFlexGrid2.Col = 1
X = MSFlexGrid2.Text
MSFlexGrid2.Col = 2
Y = MSFlexGrid2.Text
Picture6.PSet (X, Y)
Picture6.Circle (X, Y), 2 * 15
Picture6.CurrentX = X '+ 1 * 15
Picture6.CurrentY = Y '+ 1 * 15
Picture6.Print MySite2
Next
End Sub
Private Sub Check4_Click()
If Check4.Value = 1 Then
Command2.Enabled = False
Command7.Enabled = False
Else
Command2.Enabled = True
Command7.Enabled = True
End If
End Sub
Private Sub Command1_Click()
CommonDialog1.Action = 1
temp = CommonDialog1.filename
Picture2.Picture = LoadPicture(temp)
Label3.Caption = temp
RP1 = InputBox("Enter resolution of your photo from scanner", "Photo Resolution", 200)
S1 = InputBox("Enter Scale of your photo from scanner or if you made a resample to the photo", "Photo Scale", 100)
Picture2.Move 0, 0
MSFlexGrid1.Cols = 5
MSFlexGrid1.Rows = 1
MySite = 0


For i = 0 To 2
MSFlexGrid1.Col = i
MSFlexGrid1.Text = ""
Next
Picture2.Move 0, 0
If Picture2.Height > Picture1.Height - HScroll1.Height Then
VScroll1.Enabled = True
VScroll1.Max = (Picture2.Height - Picture1.Height + HScroll1.Height + 30) / 30
Else
VScroll1.Enabled = False
End If
If Picture2.Width > Picture1.Width - VScroll1.Width Then
HScroll1.Enabled = True
HScroll1.Max = (Picture2.Width - Picture1.Width + VScroll1.Width + 30) / 30
Else
HScroll1.Enabled = False
End If
Picture3.Move VScroll1.Left, HScroll1.Top
End Sub
Private Sub Command11_Click()
RH = InputBox("Enter resolution of your hardware equal to one inch in VB5", "Hardware Resolution", 96)
Factorx = InputBox("Enter the enlargement factor of the photo at X-direction", "X-Direction Enlargement Factor", 1)
FactorY = InputBox("Enter the enlargement factor of the photo at Y-direction", "Y-Direction Enlargement Factor", 1)
End Sub
Private Sub Command2_Click()
If MSFlexGrid1.Rows = 1 Then
MSFlexGrid1.Col = 0
MSFlexGrid1.Text = ""
MSFlexGrid1.Col = 1
MSFlexGrid1.Text = ""
MSFlexGrid1.Col = 2
MSFlexGrid1.Text = ""
MSFlexGrid1.Col = 3
MSFlexGrid1.Text = ""
MSFlexGrid1.Col = 4
MSFlexGrid1.Text = ""


Else
MSFlexGrid1.Rows = MSFlexGrid1.Rows - 1
End If
Call DrawDotsAgain
End Sub
Private Sub Command3_Click()
p1 = Text1.Text
On Error Resume Next
Open "C:\image1.txt" For Output As #1
For i = 0 To MSFlexGrid1.Rows - 1
MSFlexGrid1.Row = i
MSFlexGrid1.Col = 0
t = MSFlexGrid1.Text
MSFlexGrid1.Col = 3
t = t & " " & MSFlexGrid1.Text
MSFlexGrid1.Col = 4
t = t & " " & MSFlexGrid1.Text
Print #1, p1; " "; t
Next
Close
End Sub
Private Sub Command4_Click()
On Error Resume Next
CommonDialog1.Action = 2
temp = CommonDialog1.filename
SavePicture Picture2.Image, temp '"c:\image1.bmp"
End Sub
Private Sub Command5_Click()
On Error Resume Next
CommonDialog1.Action = 2
temp = CommonDialog1.filename
SavePicture Picture6.Image, temp '"c:\image2.bmp"
End Sub
Private Sub Command6_Click()
p2 = Text2.Text
On Error Resume Next
Open "C:\image2.txt" For Output As #2
For i = 0 To MSFlexGrid2.Rows - 1
MSFlexGrid2.Row = i


MSFlexGrid2.Col = 0
t = MSFlexGrid2.Text
MSFlexGrid2.Col = 3
t = t & " " & MSFlexGrid2.Text
MSFlexGrid2.Col = 4
t = t & " " & MSFlexGrid2.Text
Print #2, p2; " "; t
Next
Close
End Sub
Private Sub Command7_Click()
If MSFlexGrid2.Rows = 1 Then
MSFlexGrid2.Col = 0
MSFlexGrid2.Text = ""
MSFlexGrid2.Col = 1
MSFlexGrid2.Text = ""
MSFlexGrid2.Col = 2
MSFlexGrid2.Text = ""
MSFlexGrid2.Col = 3
MSFlexGrid2.Text = ""
MSFlexGrid2.Col = 4
MSFlexGrid2.Text = ""
Else
MSFlexGrid2.Rows = MSFlexGrid2.Rows - 1
End If
Call DrawDotsagain_2
End Sub
Private Sub Command8_Click()
On Error Resume Next
CommonDialog1.Action = 1
temp = CommonDialog1.filename
Picture6.Picture = LoadPicture(temp)
Label4.Caption = temp
RP2 = InputBox("Enter resolution of your photo from scanner", "Photo Resolution", 200)
S2 = InputBox("Enter Scale of your photo from scanner or if you made a resample to the photo", "Photo Scale", 100)
Picture6.Move 0, 0
MSFlexGrid2.Cols = 5


MSFlexGrid2.Rows = 1
MySite2 = 0
For i = 0 To 2
MSFlexGrid2.Col = i
MSFlexGrid2.Text = ""
Next
Picture6.Move 0, 0
If Picture6.Height > Picture4.Height - HScroll2.Height Then
VScroll2.Enabled = True
VScroll2.Max = (Picture6.Height - Picture4.Height + HScroll2.Height + 30) / 30
Else
VScroll2.Enabled = False
End If
If Picture6.Width > Picture4.Width - VScroll2.Width Then
HScroll2.Enabled = True
HScroll2.Max = (Picture6.Width - Picture4.Width + VScroll2.Width + 30) / 30
Else
HScroll2.Enabled = False
End If
Picture5.Move VScroll2.Left, HScroll2.Top
End Sub
Private Sub Command9_Click()
p1 = Text1.Text
On Error Resume Next
Open "C:\image1.txt" For Output As #1
For i = 0 To MSFlexGrid1.Rows - 1
MSFlexGrid1.Row = i
MSFlexGrid1.Col = 0
t = MSFlexGrid1.Text
MSFlexGrid1.Col = 3
t = t & " " & MSFlexGrid1.Text
MSFlexGrid1.Col = 4
t = t & " " & MSFlexGrid1.Text
Print #1, p1; " "; t
Next
Close
p2 = Text2.Text
On Error Resume Next
Open "C:\image2.txt" For Output As #2


For i = 0 To MSFlexGrid2.Rows - 1
MSFlexGrid2.Row = i
MSFlexGrid2.Col = 0
t = MSFlexGrid2.Text
MSFlexGrid2.Col = 3
t = t & " " & MSFlexGrid2.Text
MSFlexGrid2.Col = 4
t = t & " " & MSFlexGrid2.Text
Print #2, p2; " "; t
Next
Close
On Error Resume Next
Dim tt
CommonDialog1.Action = 2
dd$ = CommonDialog1.filename
Close
Open "c:\image1.txt" For Input As #1
Open "c:\image2.txt" For Input As #2
Open dd$ For Output As #3
Do
Line Input #1, tt
Print #3, tt
Loop Until EOF(1)
Do
Line Input #2, tt
Print #3, tt
Loop Until EOF(2)
Close
End Sub
Private Sub Form_Activate()
Factorx = 1
FactorY = 1
End Sub
Private Sub Form_Load()
flagg = 0
RH = 96
RP1 = 200
RP2 = 200
S1 = 100
S2 = 100



Me.Width = Screen.Width
Me.Height = Screen.Height
Me.WindowState = 2
Picture1.Width = 44 * (Me.Width / 100)
Picture4.Width = Picture1.Width
Picture1.Left = Me.Width / 25
Picture4.Left = 52 * (Me.Width / 100)
Picture11.Left = Me.Width / 2 - Picture11.Width / 2
Picture1.Height = 55 * (Me.Height / 100)
Picture4.Height = Picture1.Height
Picture1.Top = 6 * (Me.Height / 100)
Picture4.Top = 6 * (Me.Height / 100)
Picture9.Top = 62.5 * (Me.Height / 100)
Picture9.Height = 5 * (Me.Height / 100)
MSFlexGrid1.Height = 15 * (Me.Height / 100)
Picture7.Height = 5 * (Me.Height / 100)
Picture11.Height = 5 * (Me.Height / 100)
MSFlexGrid1.Top = 67.5 * (Me.Height / 100)
Picture7.Top = 82.5 * (Me.Height / 100)
Picture11.Top = 87.5 * (Me.Height / 100)
Label3.Width = 20 * (Me.Width / 100)
Label4.Width = Label3.Width
Label3.Height = 5 * (Me.Height / 100)
Label4.Height = Label3.Height
Label3.Left = Picture1.Left + Picture1.Width / 2 - Label3.Width / 2
Label4.Left = Picture4.Left + Picture4.Width / 2 - Label4.Width / 2
Label3.Top = 0
Label4.Top = Label3.Top
Check3.Top = Label3.Top
Check3.Left = 0.41 * Me.Width
Check4.Top = 0.01 * Me.Height
Check4.Left = 0.01 * Me.Width
Picture2.Width = Picture1.Width - 350
Picture6.Width = Picture2.Width
VScroll2.Height = VScroll1.Height
VScroll2.Width = VScroll1.Width
HScroll2.Height = HScroll1.Height
HScroll2.Width = HScroll1.Width
VScroll1.Height = Picture1.Height - VScroll1.Width - 30


VScroll1.Left = Picture1.Width - VScroll1.Width - 30
HScroll1.Width = Picture1.Width - HScroll1.Height - 30
HScroll1.Top = Picture1.Height - HScroll1.Height - 30
Picture3.Move VScroll1.Left, HScroll1.Top
VScroll2.Height = Picture4.Height - VScroll2.Width - 30
VScroll2.Left = Picture4.Width - VScroll2.Width - 30
HScroll2.Width = Picture4.Width - HScroll2.Height - 30
HScroll2.Top = Picture4.Height - HScroll2.Height - 30
Picture5.Move VScroll2.Left, HScroll2.Top
MSFlexGrid1.Left = Picture1.Left
MSFlexGrid1.Width = Picture1.Width
MSFlexGrid2.Left = Picture4.Left
MSFlexGrid2.Width = Picture4.Width
Picture7.Left = Picture1.Left
Picture8.Left = Picture4.Left
Picture9.Left = Picture1.Left
Picture10.Left = Picture4.Left
Picture4.Top = Picture1.Top
Picture4.Height = Picture1.Height
Picture8.Top = Picture7.Top
Picture8.Height = Picture7.Height
MSFlexGrid2.Top = MSFlexGrid1.Top
MSFlexGrid2.Height = MSFlexGrid1.Height
Picture10.Top = Picture9.Top
Picture10.Height = Picture9.Height
Text1.Top = Picture11.Top
Text2.Top = Text1.Top
Command11.Top = Text1.Top
Text1.Left = 0.15 * Me.Width
Text2.Left = 0.8 * Me.Width
Command11.Left = 0.9 * Me.Width
End Sub
Private Sub HScroll1_Change()
On Error Resume Next
Picture2.Left = -HScroll1.Value * 30
End Sub
Private Sub HScroll1_Scroll()
On Error Resume Next
Picture2.Left = -HScroll1.Value * 30


End Sub
Private Sub HScroll2_Change()
On Error Resume Next
Picture6.Left = -HScroll2.Value * 30
End Sub
Private Sub HScroll2_Scroll()
On Error Resume Next
Picture6.Left = -HScroll2.Value * 30
End Sub
Private Sub Picture2_MouseDown(Button As Integer, Shift As Integer, X As Single, Y As Single)
If Check1.Value = 1 And Button = 1 Then
MySite = MySite + 1
If Check4.Value = 0 Then
Picture2.DrawWidth = 1
Picture2.PSet (X, Y)
Picture2.Circle (X, Y), 2 * 15
Picture2.Print MySite
Else
End If
Picture2.CurrentX = X
Picture2.CurrentY = Y
MSFlexGrid1.Rows = MySite
MSFlexGrid1.Row = MySite - 1
MSFlexGrid1.Col = 0
MSFlexGrid1.Text = MySite
MSFlexGrid1.Col = 1
MSFlexGrid1.Text = X
MSFlexGrid1.Col = 2
MSFlexGrid1.Text = Y
MSFlexGrid1.Col = 3
MSFlexGrid1.Text = fxl
MSFlexGrid1.Col = 4
MSFlexGrid1.Text = fyl
Else
X1 = X
Y1 = Y
End If
End Sub


Private Sub Picture2_MouseMove(Button As Integer, Shift As Integer, X As Single, Y As Single)
If Check1.Value <> 1 And Button = 1 Then
Picture2.Move Picture2.Left - (X1 - X), Picture2.Top - (Y1 - Y)
If Check3.Value = 1 Then
Picture6.Top = Picture2.Top
Picture6.Left = Picture2.Left
End If
End If
End Sub
Private Sub Picture3_Click()
Picture2.Move 0, 0
HScroll1.Value = 0
VScroll1.Value = 0
End Sub
Private Sub Picture5_Click()
Picture6.Move 0, 0
HScroll2.Value = 0
VScroll2.Value = 0
End Sub
Private Sub Picture6_MouseDown(Button As Integer, Shift As Integer, X As Single, Y As Single)
If Check2.Value = 1 And Button = 1 Then
MySite2 = MySite2 + 1
If Check4.Value = 0 Then
Picture6.DrawWidth = 1
Picture6.PSet (X, Y)
Picture6.Circle (X, Y), 2 * 15
Picture6.Print MySite2
Else
End If
Picture6.CurrentX = X '+ 1 * 15
Picture6.CurrentY = Y '+ 1 * 15
MSFlexGrid2.Rows = MySite2
MSFlexGrid2.Row = MySite2 - 1
MSFlexGrid2.Col = 0
MSFlexGrid2.Text = MySite2
MSFlexGrid2.Col = 1
MSFlexGrid2.Text = X
MSFlexGrid2.Col = 2


MSFlexGrid2.Text = Y
MSFlexGrid2.Col = 3
MSFlexGrid2.Text = fxr
MSFlexGrid2.Col = 4
MSFlexGrid2.Text = fyr
Else
X1 = X
Y1 = Y
End If
End Sub
Private Sub Picture6_MouseMove(Button As Integer, Shift As Integer, X As Single, Y As Single)
If Check2.Value <> 1 And Button = 1 Then
Picture6.Move Picture6.Left - (X1 - X), Picture6.Top - (Y1 - Y)
If Check3.Value = 1 Then
Picture2.Top = Picture6.Top
Picture2.Left = Picture6.Left
End If
End If
End Sub
Private Sub VScroll1_Change()
Picture2.Top = -VScroll1.Value * 30
End Sub
Private Sub VScroll1_Scroll()
Picture2.Top = -VScroll1.Value * 30
End Sub
Private Sub VScroll2_Change()
Picture6.Top = -VScroll2.Value * 30
End Sub
Private Sub VScroll2_Scroll()
Picture6.Top = -VScroll2.Value * 30
End Sub
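The measurement program stores each mouse pick as screen coordinates (X, Y) together with photo coordinates (fxl, fyl / fxr, fyr). The conversion itself happens outside this excerpt, so the sketch below only illustrates the usual pixel-to-millimetre relation built from the values the dialogs above request: scanner resolution RP (dots per inch), display scale S (per cent), and the axis enlargement factor. The function name and exact formula are our assumption, not the program's code:

```python
MM_PER_INCH = 25.4

def pixel_to_mm(pixel, rp_dpi, scale_percent=100.0, factor=1.0):
    """Distance on the original photo, in mm, for a pixel offset.

    pixel: offset in scanned pixels; rp_dpi: scanner resolution;
    scale_percent: display/resample scale; factor: enlargement factor.
    """
    return pixel * MM_PER_INCH / rp_dpi * (100.0 / scale_percent) / factor

# at the program's default 200 dpi, one pixel spans 0.127 mm on the photo
size_mm = pixel_to_mm(1, 200)
```

Any systematic error in RP, S, or the enlargement factors propagates linearly into the image coordinates, which is why the program asks for them explicitly.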


List of the Base Line Program Code:
----------------------------------------
Dim v, m, AZ1, AZ2, B1, B2, XAL, XAR, YAL, YAR, ZAL, ZAR, ZA, B11, B21, E1, E2, L, A12, A21 As Single
Dim ro, P As String
Dim sle, t, r, w As String
Dim FLAG1 As Integer
Private Sub Command1_Click()
Text1.Text = P
v = 4 * Atn(1)
X1 = X1.Text
Y1 = Y1.Text
Z1 = Z1.Text
X2 = X2.Text
Y2 = Y2.Text
Z2 = Z2.Text
H1 = H1.Text
H2 = H2.Text
c1# = ((V1D.Text) + (V1M.Text) / 60 + (V1S.Text) / 3600) * (v / 180)
c2# = ((V2D.Text) + (V2M.Text) / 60 + (V2S.Text) / 3600) * (v / 180)
B11 = ((H1D.Text) + (H1M.Text) / 60 + (H1S.Text) / 3600)
B21 = ((H2D.Text) + (H2M.Text) / 60 + (H2S.Text) / 3600)
B1 = B11 * (v / 180)
B2 = B21 * (v / 180)
E1 = Val(X2) - Val(X1)
E2 = Val(Y2) - Val(Y1)
L = ((E1) ^ 2 + (E2) ^ 2) ^ (0.5)
If E2 = 0 Then
If E1 > 0 Then
A12 = 90
Else
A12 = 270
End If
Else
A12 = (Atn((Abs(E1)) / (Abs(E2)))) * (180 / v)
If (E1) > 0 Then
If (E2) > 0 Then
A12 = A12
Else


A12 = 180 - A12
End If
Else
If (E2) > 0 Then
A12 = 360 - A12
Else
A12 = 180 + A12
End If
End If
End If
If A12 > 180 Then
A21 = A12 - 180
Else
A21 = A12 + 180
End If
AZ2 = (A12 - B11)
AZ1 = (A12 - B11) * (v / 180)
m = (B11 + B21) * (v / 180)
s = (L * Sin(B2)) / Sin(m)
XAL = X1 + s * Sin(AZ1)
YAL = Y1 + s * Cos(AZ1)
ZAL = ((Val(Z1) + Val(H1) + s * Tan(c1#)) + (Val(Z2) + Val(H2) + s * Tan(c2#))) / 2
XA.Caption = XAL
YA.Caption = YAL
ZAA.Caption = ZAL
On Error Resume Next
If FLAG1 = 0 Then
r = InputBox("Enter name of your output file", "OPEN")
Open r For Append As #1
Print #1, ""
Print #1, " ******Results******"
Print #1, " ---------------------------------------------------------------"
Print #1, "P.Name X - Coordinate Y - Coordinate Z - Coordinate"
Print #1, " ---------------------------------------------------------------"
Print #1, P; Tab(12); XAL; Tab(34); YAL; Tab(54); ZAL
Print #1, " ---------------------------------------------------------------"
FLAG1 = 1
Else


Print #1, P; Tab(12); XAL; Tab(34); YAL; Tab(54); ZAL
Print #1, " ---------------------------------------------------------------"
End If
s = MsgBox("OK Done.. any time. Eng.Ahmed Abdel Hafeez Ahmed", , "Welcome in Eng.Ahmed programs")
End Sub
Private Sub Command2_Click()
FLAG1 = 0
cd.DialogTitle = "Angle's file name"
cd.Filter = "all files (*.*)|*.*|"
cd.Action = 1
P = cd.filename
Open P For Input As #2
Do Until EOF(2)
v = 4 * Atn(1)
Input #2, P, B11, c1#, B21
Text1.Text = P
v = 4 * Atn(1)
GoTo 2
1 b = (x - Int(x)) * 100
c = (b - Int(b)) * 100
x = Int(x) + Int(b) / 60 + c / 3600
Return
2 x = B11
GoSub 1
B11 = 360 - x
x = c1#
GoSub 1
c1# = (90 - x) * (v / 180)
x = B21
GoSub 1
B21 = x
X1 = X1.Text
Y1 = Y1.Text
Z1 = Z1.Text
X2 = X2.Text
Y2 = Y2.Text
Z2 = Z2.Text
H1 = H1.Text


H2 = H2.Text
B1 = B11 * (v / 180)
B2 = B21 * (v / 180)
E1 = Val(X2) - Val(X1)
E2 = Val(Y2) - Val(Y1)
L = ((E1) ^ 2 + (E2) ^ 2) ^ (0.5)
If E2 = 0 Then
If E1 > 0 Then
A12 = 90
Else
A12 = 270
End If
Else
A12 = (Atn((Abs(E1)) / (Abs(E2)))) * (180 / v)
If (E1) > 0 Then
If (E2) > 0 Then
A12 = A12
Else
A12 = 180 - A12
End If
Else
If (E2) > 0 Then
A12 = 360 - A12
Else
A12 = 180 + A12
End If
End If
End If
If A12 > 180 Then
A21 = A12 - 180
Else
A21 = A12 + 180
End If
AZ2 = (A12 - B11)
AZ1 = (A12 - B11) * (v / 180)
m = (B11 + B21) * (v / 180)
m = Int(m * 100000000 + 0.5) / 100000000
sl = (L * Sin(B2)) / Sin(m)
sr = (L * Sin(B1)) / Sin(m)


XAL = X1 + sl * Sin(AZ1)
XAL = Int(XAL * 10000 + 0.5) / 10000
YAL = Y1 - sl * Cos(AZ1)
YAL = Int(YAL * 10000 + 0.5) / 10000
ZAL = (Val(Z1) + Val(H1) + sl * Tan(c1#))
ZAL = Int(ZAL * 10000 + 0.5) / 10000
XA.Caption = XAL
YA.Caption = YAL
ZAA.Caption = ZAL
On Error Resume Next
If FLAG1 = 0 Then
cd.DialogTitle = "Enter name of your output file"
cd.Filter = "all files (*.*)|*.*|"
cd.Action = 1
r = cd.filename
Open r For Append As #1
Print #1, ""
Print #1, " ******Results******"
Print #1, " ---------------------------------------------------------------"
Print #1, "P.Name X - Coordinate Y - Coordinate Z - Coordinate"
Print #1, " ---------------------------------------------------------------"
Print #1, P; Tab(12); XAL; Tab(34); ZAL; Tab(54); YAL
FLAG1 = 1
Else
Print #1, P; Tab(12); XAL; Tab(34); ZAL; Tab(54); YAL
End If
Loop
Close #2
Close #1
FLAG1 = 0
s = MsgBox("OK Done.. any time. Eng.Ahmed Abdel Hafeez Ahmed", , "Welcome in Eng.Ahmed programs")
End Sub
Private Sub Command3_Click()
On Error Resume Next
Close #1
End
End Sub
Private Sub Command4_Click()


On Error Resume Next
s = InputBox("Does your data contain vertical angles from the left station (W), the right station (R), or both (T)?", "OPTION")
sle = s
End Sub
Private Sub Label10_DblClick()
cd.DialogTitle = "point's file name"
cd.Filter = "all files (*.*)|*.*|"
cd.Action = 1
a$ = cd.filename
Open a$ For Input As #2
Input #2, g#, e#, f#, g7#
Close #2
Label10.Caption = a$
X1.Text = g#
Y1.Text = e#
Z1.Text = f#
H1.Text = g7#
End Sub
Private Sub Label11_DblClick()
cd.DialogTitle = "point's file name"
cd.Filter = "all files (*.*)|*.*|"
cd.Action = 1
a$ = cd.filename
Open a$ For Input As #2
Input #2, g#, e#, f#, g14#
Close #2
Label11.Caption = a$
X2.Text = g#
Y2.Text = e#
Z2.Text = f#
H2.Text = g14#
End Sub
Private Sub Label18_DblClick()
cd.DialogTitle = "Angle's file name"
cd.Filter = "all files (*.*)|*.*|"
cd.Action = 1
P = cd.filename
Open P For Input As #2


Input #2, P, g1#, g2#, g3#, g4#, g5#, g6#, g8#, g9#, g10#, g11#, g12#, g13#
Close #2
Text1.Text = P
H1D.Text = g1#
H1M.Text = g2#
H1S.Text = g3#
V1D.Text = g4#
V1M.Text = g5#
V1S.Text = g6#
H2D.Text = g8#
H2M.Text = g9#
H2S.Text = g10#
V2D.Text = g11#
V2M.Text = g12#
V2S.Text = g13#
End Sub
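The geometry implemented in Command1_Click of the Base Line Program can be condensed to a few lines: the azimuth of the base line follows from the station coordinates, the sine rule gives the distance from station 1 to the target, and the vertical angle lifts the point to its height. A Python sketch of that interactive version's conventions (angles measured from the base line toward the target; note the batch Command2_Click variant flips the sign of the Y term):

```python
import math

def intersect(x1, y1, z1, h1, x2, y2, b1_deg, b2_deg, v1_deg):
    """Base-line intersection from two theodolite stations.

    b1_deg, b2_deg: horizontal angles at stations 1 and 2 between the
    base line and the target; v1_deg: vertical angle at station 1;
    h1: instrument height at station 1. All angles in degrees.
    """
    e1, e2 = x2 - x1, y2 - y1
    base = math.hypot(e1, e2)                       # base-line length L
    a12 = math.degrees(math.atan2(e1, e2)) % 360.0  # azimuth station 1 -> 2
    b1, b2 = math.radians(b1_deg), math.radians(b2_deg)
    s1 = base * math.sin(b2) / math.sin(b1 + b2)    # sine rule: distance from station 1
    az = math.radians(a12 - b1_deg)                 # azimuth station 1 -> target
    xa = x1 + s1 * math.sin(az)
    ya = y1 + s1 * math.cos(az)
    za = z1 + h1 + s1 * math.tan(math.radians(v1_deg))
    return xa, ya, za

# stations 100 m apart on the X axis, 45 degrees at each end:
# the target lands near (50, 50) at station height
xa, ya, za = intersect(0.0, 0.0, 0.0, 0.0, 100.0, 0.0, 45.0, 45.0, 0.0)
```

The accuracy of this intersection degrades as the angle b1 + b2 approaches 0 or 180 degrees, which is one reason base-line geometry matters for the coordinate accuracy studied in the thesis.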