A Study on Construction Defect Management Using Augmented Reality Technology

Jae-Young Lee, Jong-Soo Choi
The Graduate School of Advanced Imaging Science, Multimedia and Film, Chung-Ang University, Seoul, Korea
[email protected] [email protected]

Oh-Seong Kwon, Chan-Sik Park
School of Architecture and Building Science, Chung-Ang University, Seoul, Korea
[email protected] [email protected]

Abstract—In this paper, we propose a construction defect management system that uses augmented reality (AR) technology to overlay virtual objects on input images in real time using a single camera. The proposed method checks for construction errors using an AR technique that extracts feature points, with the aid of a marker, from three-dimensional scenes projected as images and tracks them to estimate the camera position. We found that this method enables efficient defect management on construction sites.

I. INTRODUCTION

Augmented reality (AR) is a computer technology that combines, in real time, virtual objects with real-world images input through a camera to provide users with a new type of computing environment [1]. Various AR-based techniques are being introduced for application in intuitive perceptual interfaces and ubiquitous computing.

The typical process for implementing AR technology is as follows. First, objects are detected and the camera position information is estimated from the detected objects. Next, the locations where virtual objects will be registered are predicted using the estimated information. Lastly, the real-world images are combined with the virtual images. The key element in this process is calculating the camera position information from the images captured by a camera and synthesizing virtual objects using that position information [2][3][4].

Fig. 2. Virtual facility management data overlaid onto a real indoor setting. Source: Kensek et al. 2000.

Recently, studies have been conducted in the construction industry to improve construction defect management using visualization and mobile computing technologies. Furthermore, studies on defect management systems using RFID, PDA, and laser scanners have presented methods that can reduce the workload of construction site managers. However, these studies focused more on the actions taken after the occurrence of defects than on defect prevention. The present study suggests a method to efficiently prevent defects using AR technology [5][6][7].

As for the structure of this paper, Section II describes the camera model and the use of AR technology in the proposed system. Section III explains the application of AR to construction site defect management. Section IV presents the experiments and assessment. Finally, Section V discusses the experimental results and future studies.

II. COMPUTER VISION TECHNOLOGY

A. Frame Differential Technique

Fig. 1. Visualization of simulated steel erection processes in outdoor augmented reality (reprinted with permission from Behzadan and Kamat 2009).

The frame differential technique, based on image processing, is used to verify the location of objects using two images captured with a camera. It detects changed regions from the pixel differences between the two frames. The method is computationally simple, which makes it appropriate for real-time use at construction sites.
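As a minimal sketch, the frame differential technique can be implemented with NumPy as follows; the function name and the threshold value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def frame_difference(frame_a, frame_b, threshold=30):
    """Return a binary mask of pixels that changed between two grayscale frames.

    frame_a, frame_b: 2-D uint8 arrays of equal shape.
    threshold: minimum absolute intensity difference counted as a change
    (an assumed tuning parameter; the paper does not specify a value).
    """
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy example: two 4x4 "frames" differing in a 2x2 region.
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[1:3, 1:3] = 200            # simulate a changed object
mask = frame_difference(a, b)
print(mask.sum())            # -> 4 changed pixels
```

In practice the two frames would be the BIM-rendered reference image and the site photograph, resampled to the same resolution before differencing.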

978-1-4673-1401-5/12/$31.00 ©2012 IEEE

Fig. 3. Geometric relationship between features and an image taken by a camera.

Fig. 4. Projective geometry of input image sequences.

B. Detection and Tracking of Feature Points

Because the features must be detected against the same background image in every frame, it is generally more useful to detect corner points, where brightness changes sharply and the differential coefficients are large. The proposed algorithm uses the Shi-Tomasi method [10] to detect feature points that are useful for tracking, such as corner points. The widely used Harris corner detection method uses the second derivatives of the image brightness: a pixel is regarded as a corner point if all the eigenvalues of the Hessian matrix of second derivatives at that pixel are large. As second-derivative images have no response in regions of uniform gradient, they are useful for detecting corners. The Shi-Tomasi corner detector, in contrast, accepts a pixel as a good feature to track if the smaller of the two eigenvalues of this matrix is greater than a specified threshold.

To extract geometric measurements from a detected feature point, real-valued coordinates must be used instead of integer coordinates. Peaks in an image are rarely located exactly at pixel centers, so sub-pixel corner detection [11] is used: the coordinates of peaks lying between pixels are found by fitting curves, such as parabolas.

The Lucas-Kanade tracker (LKT) [12] is used to track the detected feature points. This is a sparse optical flow method that uses only the local information from a small window covering predefined pixels; the points to be tracked are specified beforehand. LKT is based on three assumptions: brightness constancy, temporal persistence, and spatial coherence. Brightness constancy assumes that the brightness values of specific pixels remain constant across frames at different times; thus the partial derivative along the time axis t in expression (1) is 0. By tracking areas with the same brightness value in this way, the velocity between consecutive frames can be calculated. Ix, Iy, and It are the partial derivatives along the x, y, and time axes, respectively, and u and v are the coordinate changes along the x and y axes. Temporal persistence assumes that the pixels around a specific moving pixel change coordinates consistently over time. Because the two unknowns u and v cannot be determined from the single equation in expression (1), the 25 neighboring pixels are assumed to share the same motion, and u and v are then computed by least squares, as shown in expression (2). LKT is fast because, under the temporal persistence assumption that coordinate changes are small between frames, it uses only a small local window; its disadvantage is that it cannot capture motions larger than that window. To address this, a Gaussian image pyramid is used: the pyramid is built from the original image, tracking starts at the coarsest level, and the tracked displacement is progressively refined at finer levels. In this way, large coordinate changes can be detected even with a window of limited size [13][14].
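Expressions (1) and (2) did not survive extraction; in the standard Lucas-Kanade formulation described above they take the following form (a reconstruction from the surrounding definitions, not copied from the original):

```latex
% Expression (1): the optical-flow (brightness-constancy) constraint
I_x u + I_y v + I_t = 0

% Expression (2): stacking the constraint over the 25 window pixels
% p_1, ..., p_{25} as A d = b, with rows A_i = [ I_x(p_i) \; I_y(p_i) ]
% and b_i = -I_t(p_i), the displacement d = (u, v) follows by least squares:
\begin{bmatrix} u \\ v \end{bmatrix} = (A^{\top} A)^{-1} A^{\top} b
```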
C. Calculation of Camera Position Information

Three-dimensional object coordinates are calculated by back-projecting two-dimensional (2D) coordinates into a three-dimensional (3D) space. The proposed system is preceded by an initialization step in which the 2D coordinates of the feature points detected in the captured images are converted to coordinates in 3D space. As the coordinates detected from the camera input images are 2D, the depth (z axis) is assumed to be zero when they are converted to 3D. One problem is that if the detected feature pixels are not located on the same plane, errors may be generated. Because of this, the proposed method requires the precondition that it be applied to relatively flat background scenes, such as desktop images.

The camera position information is then calculated from the relationship between the 2D coordinates of the feature points acquired from consecutive frames and the 3D coordinates acquired in the initialization step. As shown in Figure 4, writing the homogeneous forms of the 3D point M = (X, Y, Z) and the 2D image point m = (x, y) as M̄ = [X Y Z 1]^T and m̄ = [x y 1]^T, the projection relationship between them is defined by the 3×4 camera matrix P as expression (3), λm̄ = PM̄ = K[R | t]M̄. Here λ is the scale factor of the projection matrix P, and R is the 3×3 matrix describing the rotational displacement of the camera. Furthermore, r_i is the i-th column of R, and t is the 3×1 translation vector that represents the camera movement. The 3×3 matrix K is a non-singular matrix, the camera calibration matrix, whose elements are the intrinsic parameters of the camera; it is generally defined as expression (4),

K = [ f_x  s  x_0 ; 0  f_y  y_0 ; 0  0  1 ].

In expression (4), f_x and f_y are the scale factors along the image coordinate axes, and s is the skew parameter of the image. Furthermore, (x_0, y_0) is the principal point of the image. To obtain the camera matrix in expression (4), a separate camera calibration is generally needed. In this study, the camera matrix was obtained using the calibration method proposed by Zhang [15]. The

Fig. 5. A block diagram of the proposed defect management process.

Fig. 6. A block diagram of the proposed system (vision technology).

camera correction method of Zhang calculates the image of the absolute conic (IAC) ω, which is projected into the images, using the invariance under isometric transformation that characterizes the absolute points on the plane at infinity. The intrinsic parameter matrix of the camera is then obtained from the relationship ω⁻¹ = KKᵀ. To apply Zhang's method, three or more images of the same plane, taken from different directions and locations, are required [16][17][18].

III. PROPOSED CONSTRUCTION MANAGEMENT METHOD

With the recent development of visualization and mobile technologies, AR is attracting attention in the construction industry as a tool for identifying virtual information at the site. In particular, defect management information transmitted through AR is important for defect prevention. Defect management here means preventing defects, even before construction, by finding their root causes. As mentioned above, many studies on construction defect management using advanced technologies, such as RFID, PDA, and laser scanners, are currently being conducted. At present, construction site managers identify construction errors using documents such as checklists or punch lists together with design drawings to recognize and manage defects. The studies on these newer technologies have tried to improve the current work processes, but they focus only on follow-up measures for defects that have already occurred. The purpose of defect management is not only to find defects but also to find their causes and prevent their recurrence.

Building information modeling (BIM) is also gaining much interest in the construction industry. BIM expands the information in conventional 2D drawings into 3D models and can improve the degree of completion by simulating the entire process of a construction project, including planning, design, construction, and maintenance. Furthermore, BIM software combines construction, information engineering,

Fig. 7. Scenario of field inspection using image matching.

and computer graphic technologies. The following describes a technology for inputting defect management information into BIM software and visualizing it using marker-based AR.

A. Image Matching Technique Based on Image Processing

An image matching technique based on image processing can be used in the construction defect management process as described below.
1) The defect manager acquires 2D information of the target location from the BIM software.
2) The acquired information is saved as a 2D virtual image file in the image matching program.
3) The defect manager conveys the camera position information, such as height, angle of view, and distance, to the worker.
4) After finishing the work, the worker uses a camera to take a picture of the completed work from the conveyed position.
5) The worker sends the work picture to the image matching program.
6) The image matching program compares the 2D image saved by the manager with the actual work picture sent by the worker, checks for construction errors, and conveys them to the site inspector.
7) The site inspector checks the resulting images and evaluates the work progress or decides on rework.
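The comparison in step 6 above can be sketched as a simple threshold on the fraction of changed pixels; the function name and both thresholds are illustrative assumptions, not values given in the paper.

```python
import numpy as np

def check_construction_error(bim_image, site_image, pixel_threshold=30,
                             defect_ratio=0.05):
    """Compare the manager's saved 2D BIM image with the worker's site photo.

    Returns True when the fraction of changed pixels exceeds defect_ratio,
    signalling a possible construction error for the site inspector.
    Both images are equal-shape grayscale uint8 arrays; all parameter
    values here are illustrative assumptions.
    """
    diff = np.abs(bim_image.astype(np.int16) - site_image.astype(np.int16))
    changed_fraction = (diff > pixel_threshold).mean()
    return bool(changed_fraction > defect_ratio)

# Toy example: identical images -> no error flagged.
img = np.zeros((10, 10), dtype=np.uint8)
print(check_construction_error(img, img.copy()))   # -> False
```

A real pipeline would first align the site photo to the BIM render (the manager conveys the camera pose in step 3 precisely so the two views match) before differencing.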

Fig. 8. Scenario of field inspection using marker-based AR.

B. Augmented Reality Technology

1) The defect manager extracts information, such as shape, materials, and process, from the BIM model that contains the defect management information.
2) The defect manager inputs the extracted information into the predefined marker information.
3) The defect manager conveys the created marker and its attachment position to the site inspector and worker.
4) The worker attaches the marker to the specified position and checks the task to be performed on a PC. At the actual work site, the field inspector checks the construction state in augmented images using the marker for the completed task.
5) After completing the task, the worker takes images of the task with a camera and sends them to the defect manager and field inspector for review.
6) The defect manager and inspector check the augmented images. If any errors are found, they instruct the field worker to stop working or to rework the task.

IV. EXPERIMENTAL RESULTS

To assess the defect management method using image matching and augmented reality technologies, a miniature model was produced and compared with the BIM model. The experiment was conducted on a PC with an Intel Core i5 CPU at 2.8 GHz and 4 GB RAM. Figure 10(a) shows the virtual site generated from the BIM model information, and 10(b) shows the site image using the miniature model. The assessment was carried out using these two pieces of information.
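To position an augmented overlay, the system projects 3D object coordinates into the image through the pinhole relationship λm̄ = K[R | t]M̄ described in Section II-C. The sketch below illustrates that projection; all numeric values (focal lengths, principal point, marker distance) are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Assumed intrinsics: 800-pixel focal lengths, zero skew, 640x480 image.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                         # camera aligned with the marker plane
t = np.array([[0.0], [0.0], [2.0]])   # marker plane 2 m in front of camera

P = K @ np.hstack([R, t])             # 3x4 projection matrix K [R | t]

# Homogeneous 3D point 10 cm to the right on the marker plane (Z = 0).
M = np.array([0.1, 0.0, 0.0, 1.0])
m = P @ M
x, y = m[0] / m[2], m[1] / m[2]       # divide by the scale factor lambda
print(x, y)                           # -> 360.0 240.0
```

Projecting each vertex of a virtual object (e.g. the pipes in the experiment below) this way and drawing the result over the camera frame produces the augmented image the inspector reviews.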

Fig. 9. Lab-test system overview.


Fig. 10. 3D BIM model and mock-up model for lab-test: (a) virtual BIM model, (b) mock-up model.

Fig. 11. Frame differential technique: (a) BIM model image, (b) mock-up model image, (c) result image.

Fig. 12. Image-matching test result.

A. Assessment of Defect Management Using Image Matching

Using the frame differential technique described above, the user determines whether the 2D image of the virtual site acquired from the BIM program correctly matches the actual 2D image of the construction objects acquired by a worker at the site. In the figure, 11(a) is the virtual image and 11(b) is the image acquired from the site. Applying the frame differential technique to these two images gives the result in 11(c). The most changed part of the image in 11(c) is the right side of the window. The positions of the door and window can be determined using this information, as shown in the figure below.

B. Assessment of Defect Management Using AR

AR inserts virtual objects into an actual environment and shows them to users. For defect management using AR, we performed an experiment with existing pipes on a wall. When a marker was attached to the specified position in the miniature model and AR was applied, three pipes appeared at the desired position. However, only two pipes appeared in the miniature model, which means that there was a defect. We conducted experiments using computer vision-based algorithms for construction site defect management using the virtual images of the BIM model and a miniature model of the real world. These

Fig. 13. Mock-up model AR lab-test result: (a) mock-up model, (b) result image.

experiments confirmed that defect management using AR can be applied to actual construction sites.

V. CONCLUSION

In this paper, we proposed a construction defect management system using image-based AR technology. Until now, most buildings have been constructed using 2D design drawings created with CAD. There are many difficulties with

Fig. 14. AR lab-test result.

this method in the construction of atypical modern buildings. However, AR technology can improve workers' understanding of defects by allowing them to see 3D virtual objects on a display unit at the site. As the application of AR improves workers' understanding of their tasks, it can be used for the management of defects, processes, and safety. A miniature model was used for the experiments in this study. Future tests will be performed at actual construction sites, and, as the use of a PC has some limitations, the application of AR to mobile devices will be studied.

ACKNOWLEDGMENT

This work was supported by the Korean Research Foundation under the BK21 project and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2011-0016501).

REFERENCES

[1] R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier and B. MacIntyre, "Recent Advances in Augmented Reality," IEEE Computer Graphics and Applications, pp. 34-47, 2001.
[2] S.H. Lee, S.K. Lee and J.S. Choi, "Real-time camera tracking using a particle filter and multiple feature trackers," International IEEE Consumer Electronics Society's Games Innovations Conference (ICE-GIC 2009), pp. 29-36, Aug. 2009.
[3] P. Keitler, "Mobile Augmented Reality based 3D Snapshots," in Proc. IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 199-200, 2009.
[4] J.Y. Lee, H.M. Park, S.H. Lee, S.H. Shin, T.E. Kim and J.S. Choi, "Design and implementation of an augmented reality system using gaze interaction," Multimedia Tools and Applications, Online First, Dec. 2011.
[5] J.I. Lee, A Study on Application of Web-Based PDA System by an Analysis of Construction Confirmation Work Process in Public Apartment Project, Hanyang University, Seoul, Korea, 2010.
[6] G.S. Oh and J.H. Song, "A study on the construction materials management using RFID," Journal of the Korea Academia-Industrial Cooperation Society, vol. 9, no. 6, pp. 103-111, 2010.
[7] L.C. Wang, "Enhancing construction quality inspection and management using RFID technology," Automation in Construction, vol. 17, no. 4, pp. 467-479, 2008.
[8] Y.S. Kim, S.W. Oh, Y.K. Cho and J.W. Seo, "A PDA and wireless web-integrated system for quality inspection and defect management of apartment housing projects," Automation in Construction, vol. 17, no. 2, pp. 163-179, 2008.

[9] S.N. Yu, J.H. Jang and C.S. Han, "Auto inspection system using a mobile robot for detecting concrete cracks in a tunnel," Automation in Construction, vol. 16, no. 3, pp. 255-261, 2007.
[10] J. Shi and C. Tomasi, "Good features to track," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 593-600, Jun. 1994.
[11] D. Chen and G. Zhang, "A New Sub-Pixel Detector for X-Corners in Camera Calibration Targets," International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, 2005.
[12] J.Y. Bouguet, "Pyramidal Implementation of the Lucas Kanade Feature Tracker," Intel Corporation, Microprocessor Research Labs, 2000.
[13] J. Barron and N. Thacker, "Computing 2D and 3D Optical Flow," Tina Vision, 2005.
[14] G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library, O'Reilly, 2008.
[15] Z. Zhang, "A Flexible New Technique for Camera Calibration," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, Nov. 2000.
[16] S.H. Lee and J.S. Choi, "Estimation of Human Height and Position using a Single Camera," Journal of the Institute of Electronics Engineers of Korea, vol. 45, no. 3, pp. 20-31, Aug. 2008.
[17] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge Univ. Press, 2003.
[18] O. Faugeras, Three-Dimensional Computer Vision, The MIT Press, 1993.