
Automation in Construction 59 (2015) 128–138


Vision guided autonomous robotic assembly and as-built scanning on unstructured construction sites

Chen Feng a,⁎, Yong Xiao a, Aaron Willette b, Wes McGee b, Vineet R. Kamat a

a Department of Civil and Environmental Engineering, University of Michigan, Ann Arbor, USA
b College of Architecture and Urban Planning, University of Michigan, Ann Arbor, USA

⁎ Corresponding author. Tel.: +1 734 546 9083. E-mail address: [email protected] (C. Feng).

http://dx.doi.org/10.1016/j.autcon.2015.06.002
0926-5805/© 2015 Elsevier B.V. All rights reserved.

Article info

Article history: Received 5 November 2014; Received in revised form 10 April 2015; Accepted 2 June 2015; Available online 2 July 2015.

Keywords: On-site construction robotics; Autonomous assembly; Pose estimation; As-built 3D modeling

Abstract

Unlike robotics in the manufacturing industry, on-site construction robotics has to consider and address two unique challenges: 1) the rugged, evolving, and unstructured environment of typical work sites; and 2) the reversed spatial relationship between the product and the manipulator, i.e., the manipulator has to travel to and localize itself at the work face, rather than a partially complete product arriving at an anchored manipulator. The presented research designed and implemented algorithms that address these challenges and enable autonomous robotic assembly of freeform modular structures on construction sites. Building on the authors' previous work in computer-vision-based pose estimation, the designed algorithms enable a mobile robotic manipulator to: 1) autonomously identify and grasp prismatic building components (e.g., bricks, blocks) that are typically non-unique and arbitrarily stored on-site; and 2) assemble these components into pre-designed modular structures. The algorithms use a single camera and a visual marker-based metrology to rapidly establish local reference frames and to detect staged building components. Based on the design of the structure being assembled, the algorithms automatically determine the assembly sequence. Furthermore, if a 3D camera is mounted on the manipulator, 3D point clouds can be readily captured and registered into the same reference frame through our marker-based metrology and the manipulator's internal encoders, either after construction to facilitate as-built Building Information Model (BIM) generation, or during construction to document details of the progress. Implemented using a 7-axis KUKA KR100 robotic manipulator, the presented robotic system has successfully assembled various structures and created as-built 3D point cloud models autonomously, demonstrating the designed algorithms' effectiveness in autonomous on-site construction robotics applications.

© 2015 Elsevier B.V. All rights reserved.

1. Introduction

Several studies have argued that construction has seen a significant productivity decrease over the last several decades compared to other industries [30]. Construction has also been documented to have some of the highest rates of workplace injuries and fatalities [6]. Automation and robotics in construction (ARC) has the potential to relieve human workers from repetitive and dangerous tasks, and has been extensively promoted in the literature as a means of improving construction productivity and safety [3].

Compared to the tangible benefits of automation and robotics identified by the manufacturing industry, the construction industry is still exploring feasible and broadly deployable ARC applications [3]. This can be attributed to several commercial and technical challenges. From the commercial perspective, the fragmented and risk-averse nature of the construction industry leads to little investment in ARC research, causing construction to lag behind other industries [31]. On the other hand, as described next, there are several technical complexities inherent in construction that have contributed to hindering the successful development and widespread use of field construction robots.

1.1. Technical challenges

1.1.1. Unstructured construction environments

Automated and robotized manufacturing facilities are typically considered structured environments, since both the machines and the evolving products either stay in their predefined locations or move on predesigned and typically fixed paths. In general, such environments do not change shape or configuration during the performance of manufacturing tasks, making the enforcement of tight tolerances possible [23]. In contrast, construction sites can typically be considered unstructured since they are constantly evolving, dramatically changing shape and form in response to construction tasks. Building components are moved around without fixed paths or laydown/staging areas. Various physical connections are established through improvisation in response to in-situ conditions, making tight tolerances hard to maintain and enforce [24].


1 http://www.robotsinarchitecture.org/map-of-robots-in-architecture.


1.1.2. Mobility of construction manipulators

In manufacturing, factory robotics typically involves robotic platforms that are generally stationary (or have limited linear mobility) and partially complete products that arrive at robot workstations and precisely localize themselves in the robots' base reference frames. Precision is achieved by controlling the pose of the moving (and evolving) product, and the robots themselves are programmed to manipulate the products through fixed trajectories. Thus, from a mobility and cognitive perspective, a factory robot has little responsibility and autonomy. Control is achieved by enforcing tight tolerances in moving and securing the product in the manipulator's vicinity. However, this spatial relationship is reversed in construction. A construction robot has to travel to its next workface (or be manually set up there), perceive its environment, account for the lack of tight tolerances, and then perform manipulation activities in that environment. This places a significant mobility and cognitive burden on a robot intended for construction tasks even if the task itself is repetitive.

This discussion highlights that factory-style automation on construction sites requires the development of robots that are significantly more mobile and perceptive when compared to typical industrial robots. Such on-site construction robots have to be able to semantically sense and adjust to their unstructured surroundings and the resulting loose tolerances. This paper proposes a new high-accuracy 3D machine vision metrology for mobile construction robots. The developed method uses fiducial markers to rapidly establish a local high-accuracy control environment for autonomous robot manipulation on construction sites. Using this method, it is possible to rapidly convert a portion of a large unstructured environment into a high-accuracy, controllable reference frame that can allow a robot to operate autonomously.

The rest of the paper is organized as follows. Related work is reviewed in Section 1.2. The authors' technical approach is discussed next in detail in Section 2. The experimental results of both assembly and scanning are shown and discussed in Section 3. Finally, in Section 4, the conclusions are drawn and the authors' future work is summarized.

1.2. Previous work

1.2.1. Robotic manipulators in construction

The construction community has pursued research on robotic manipulators for several decades: for example, Slocum and Schena [33] proposed the Blockbot for automatic cement block wall construction; Pritschow et al. [29] identified the needs and requirements of a brick-laying robot for masonry construction and developed a control system for such robots.

A large portion of the construction robotic manipulator research focused on the mechanics and control of specific construction activities. Fukuda et al. [13] discussed the mechanism and the control method of a robotic manipulator in construction based on human–robot cooperation. Yu et al. [37] proposed an optimal brick laying pattern and trajectory planning algorithm for a mobile manipulator system, with computer simulation to test its efficiency. Hansson and Servin [17] developed a semi-autonomous shared control system of a large-scale manipulator in unstructured environments, with a forwarder crane prototype to test its performance. Chung et al. [7] proposed a new spatial 3 degree-of-freedom (DOF) parallel-type master device for a glass window panel fitting task. Gambao et al. [15] developed a modular flexible collaborative robot prototype for material handling, although without any perception sensors for capturing the working environment.

Another important aspect of construction robotic manipulators lies in sensing and perception. Kahane and Rosenfeld [20] proposed a "sense-and-act" operation concept enabling an indoor mobile robot to position itself with approximately 10 cm accuracy, using a CCD camera and several laser projectors. Kim and Haas [21] proposed automatic infrastructure inspection and maintenance using machine vision for crack mapping. Gambao et al. [14] developed a robot assembly system based on laser telemeter sensors of 6 to 15 mm positional precision.

During these research studies, it was generally realized that increasing the level of autonomy of construction robots requires high accuracy localization of the robot: from 3–5 cm indoor positional accuracy for contactless construction tasks such as spray-painting, to 2–3 mm accuracy for more precise tasks demanding direct contact between the manipulator and building components [32]. This requirement has posed a significant challenge for ARC because, even using current state-of-the-art simultaneous localization and mapping (SLAM) techniques, such accuracy is hard to achieve at large scales [22]. In order to address this issue, the authors of this study chose to pursue computer-vision-based pose estimation algorithms that can achieve high accuracy locally around a visual marker [10,26].

1.2.2. 3D as-built modeling in construction

3D as-built modeling (e.g., BIM) plays an important role in a wide range of civil engineering applications. This modeling process usually starts with collecting 3D point clouds of sites of interest. Paul et al. [27] utilized a 6-DOF anthropomorphic robotic arm to get the 3D mapping of a complex steel bridge with a laser range scanner. Brilakis et al. [5] outlined the technical approach for automated as-built modeling based on point clouds generated from hybrid video and laser scanning data. Akula et al. [2] explored different 3D imaging technologies, e.g., a 3D image system, image-based 3D reconstruction, and a Coherent Laser Radar scanner, to map the locations of rebar within a section of a railway bridge deck in order to assist a future drill operator in making real-time decisions with visual feedback. Zhu and Donia [39] investigated the advantages and drawbacks of RGBD cameras in as-built indoor environment modeling, with evaluation of the accuracy of collected data, the difficulty of automatic scan registration, and the recognition of building elements, demonstrating the RGBD camera's potential in as-built BIM modeling. In this research, the automatic planning, scanning, and registration of point clouds obtained from a 3D camera mounted on the manipulator are achieved with the visual marker-based metrology and the manipulator's internal encoders.

Once 3D point clouds are obtained, CAD-like geometric models can be generated. For example, Son et al. [34] automatically extracted 3D pipeline models from laser scanning data based on the curvature and normals of point clouds; Han et al. [16] proposed an automated and efficient method to extract tunnel cross sections from terrestrial laser scan (TLS) data. While this research focuses more on the automatic registration of different frames of point clouds, the resulting registered point clouds could be input into such algorithms to generate semantically meaningful CAD-like geometric entities for as-built BIM.

1.2.3. Robotic manipulators in architecture

Recently the architectural design community has also shown an increased interest in industrial robotics, with many academic programs investing in their own robotic work cells.1 Capitalizing on the machines' inherent flexibility, they have leveraged the industrial robot as a development platform for the exploration and refinement of novel production techniques in which material behavior is intrinsically linked to fabrication and assembly logics. As part of the general ecosystem of industrial robotics, computer vision systems have begun to play an increasingly important role in these research initiatives, with a number of architectural research groups developing interfaces for accessible hardware such as the Microsoft Kinect.

Initially, the majority of architectural robotic research utilizing computer vision revolved around its application at the micro scale, using a vision feedback system to make incremental adjustments to a robotic strategy based upon local variations. Examples of this include Dierichs et al.'s [8] research into poured aggregate structures at the Institute for Computational Design at the University of Stuttgart and Dubor and Diaz's [9] project Magnetic Architecture from the Institute for Advanced



Architecture of Catalonia, two robotics projects which use the Microsoft Kinect to provide information on local variations from the intended design geometry, which is then used to generate incremental adjustments in succeeding operations. While beneficial as a means to adjust for material variation and machine error, these implementations are not robust enough for in-situ robotics due to the complexities of construction sites.

Acknowledging the limitations of processes developed within the safety of the research laboratory, architects have slowly begun to explore the application of computer vision at the macro scale for adaptive path planning. At the forefront of this work has been the research conducted at ETH Zurich led by Fabio Gramazio and Matthias Kohler. Heavily invested in the potential of construction site robotics, their work has included the development of hardware/software solutions that allow industrial robots to dynamically adjust their operations at both the micro and macro levels. This research is best exemplified in their ECHORD project,2 in which an eight-meter-long module wall was assembled by an ABB robot mounted on a mobile track system [19]. Constructing a stacked module wall along a gestural path captured by the robot's computer vision system, this mobile robot also used the same system to reposition itself on the construction site (expanding its operable reach envelope) and make local adjustments based on topographic variation. Without using an extensive sensor suite like the one used in ECHORD, the robotic platform described in this paper successfully built similar module walls purely based on perceived information from a single camera; moreover, as-built 3D point clouds of the assembled outcome can be readily obtained if the robotic platform is equipped with a 3D camera on the manipulator, for purposes like as-built BIM generation or construction progress documentation.


2. Technical approach

2.1. System overview

The developed robotic in-situ assembly and scan system consists of several major components, as shown in Fig. 1. The assembly workflow of the system consists of an offline design process and an online building process. During the offline process, a designer models the intended structure in 3D, which is then analyzed and validated by the assembly planner, outputting an assembly plan for the online process. The online assembly process uses a fixed camera mounted on the base of the robot to provide images to the pose estimator, which detects staged building components and estimates their poses, and more importantly localizes the robot itself in the local building reference frame. Having computed this information, the plan achiever then sequentially transforms each step from the automatically generated assembly plan into an executable command, which can be interpreted by the robot controller and subsequently executed by the robot manipulator. In addition, the visualization component also receives the information generated by the pose estimator, as well as the robot's real-time pose feedback from the controller, to simultaneously represent the actual on-site assembly process in a 3D virtual environment for improved monitoring.

Similarly, for the scan workflow, based on design information, the scan planner can feed a scanning plan to the plan achiever, controlling the robot manipulator to stop and capture 3D images at a series of desired poses using the 3D camera mounted on the manipulator. Since the robot platform is aware of both its base's pose relative to the local building reference frame, through our marker-based metrology, and the 3D camera's pose relative to its base, through its internal encoders, the scan register can then readily transform each frame of captured 3D images into the same local building reference frame, resulting in a 3D point cloud describing the construction site.

2 http://dfab.arch.ethz.ch/web/e/forschung/198.html.

2.2. Calibration of pose estimator

Before introducing the details of the other components of the system, it is important to discuss how the pose estimator is calibrated, since this is crucial to the level of assembly/scan accuracy that the system can achieve. This process includes two steps to be finished before system operation: intrinsic and extrinsic calibration of the camera. Intrinsic calibration involves estimating the camera's focal length, the principal point's position on the image plane, and distortion parameters. On the other hand, extrinsic calibration aims to determine: 1) the relative 6-DOF pose of the camera in the robot's base coordinate frame for assembly, and 2) the relative 6-DOF pose of the 3D camera in the robot's tool coordinate frame for scan registration. It must be noted that both intrinsic and extrinsic calibration are one-time processes, as long as the camera/3D camera is fixed-focus and rigidly mounted in the robot's base/tool coordinate frame (e.g., fixed installation on the robot's base/end-effector).

2.2.1. Intrinsic parameters

Unlike the popular plane-based camera calibration method [38] implemented in OpenCV3 and the Matlab Calibration Toolbox,4 the authors chose to calibrate the camera using a 3D rig, which is similar to the classic calibration in photogrammetry. This 3D rig was made by attaching N Apriltags [26] on two intersecting planes forming a 90° angle (as shown in the top of Fig. 2) so that the 3D coordinate X of each Apriltag's center could be readily measured.

The process of calibration was then simply taking a sequence of M images of the rig and inputting them into the author-developed camera calibration tool,5 which takes advantage of the Apriltag detection algorithm to detect the 2D image coordinate U of each tag center and establish correspondence with X. Then the initial camera intrinsic and extrinsic parameters can be obtained through the Direct Linear Transform (DLT) [18] and subsequently optimized by bundle adjustment [35], which minimizes the re-projection error by tuning both the intrinsic (K) and extrinsic (Ri and ti) camera parameters as in Eq. (1):

\[
\underset{K,\{R_i,t_i\}}{\arg\min}\; \sum_{i=1}^{M}\sum_{j=1}^{N} \left\| U_{i,j} - \pi\!\left(K\left(R_i X_j + t_i\right)\right) \right\|^{2}, \tag{1}
\]

where the perspective division function π : ℝ³ → ℝ² converts a 2D homogeneous coordinate into a 2D Cartesian coordinate.

Benefitting from the high corner detection accuracy of Apriltag as well as the 3D rig, this intrinsic calibration produces more robust and repeatable results than the alternatives mentioned above.
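To make the Eq. (1) refinement concrete, the following is a minimal Python sketch of the bundle-adjustment step, assuming the DLT has already produced an initial K and one 6-DOF pose per image; the data shapes and function names are illustrative, and distortion terms are omitted for brevity:

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(K, rvec, t, X):
    # pi(K(RX + t)): rotate and translate the rig points into the camera
    # frame, then perspective-divide to 2D pixel coordinates.
    Xc = Rotation.from_rotvec(rvec).apply(X) + t
    uv = (K @ Xc.T).T
    return uv[:, :2] / uv[:, 2:3]

def residuals(params, X, U):
    # params = [fx, fy, cx, cy, rvec_1|t_1, ..., rvec_M|t_M]
    fx, fy, cx, cy = params[:4]
    K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1.0]])
    res = []
    for i in range(U.shape[0]):                 # M images
        pose = params[4 + 6 * i: 10 + 6 * i]    # 6-DOF pose of image i
        res.append(U[i] - project(K, pose[:3], pose[3:]))
    return np.concatenate(res).ravel()

def refine(K0, poses0, X, U):
    # X: (N,3) rig points; U: (M,N,2) detected tag centers; poses0: (M,6).
    x0 = np.concatenate([[K0[0, 0], K0[1, 1], K0[0, 2], K0[1, 2]],
                         np.asarray(poses0, float).ravel()])
    return least_squares(residuals, x0, args=(X, U)).x

The same least-squares pattern applies to the Eq. (2) refinement further below, with only the parameter vector changed from (K, Ri, ti) to (cRm, ctm).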

2.2.2. Extrinsic parameters

Once the camera's intrinsic parameters are calibrated, the camera's relative pose in the robot's base coordinate frame, rTc = [rRc, rtc; 0, 1], can be estimated using an extrinsic calibration marker containing L (L > 1) Apriltags with known size and spacing.

As shown in Fig. 3(a), rTc can be composed from the two other poses, mTr, the robot's pose in the extrinsic calibration marker's coordinate frame, and cTm, the marker's pose in the camera coordinate frame, by:

\[
{}^{r}T_{c} = \left({}^{c}T_{m}\,{}^{m}T_{r}\right)^{-1}.
\]

This extrinsic calibration process of the camera used for assembly consists of the following steps:

1) Fix the marker in the camera's field of view;
2) Manually control the robot manipulator to pinpoint at least 4 non-collinear points on the marker and record their 3D coordinates rX in the robot's base coordinate frame; also measure their local 3D coordinates mX in the marker's reference frame (by setting all Z coordinates to zero);
3) Take an image from the camera and detect the L Apriltags' 4L corners' 2D image coordinates U.

3 http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html.
4 http://www.vision.caltech.edu/bouguetj/calib_doc/.
5 Available at https://code.google.com/p/cv2cg/#apriltag.


Fig. 1. System overview.



With this information collected, mTr can be estimated using the well-known rigid body registration [4] from the 3D point set rX to mX, while cTm can be estimated by decomposing the homography between mX and U using the previously calibrated camera intrinsic parameters K [10,38].
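As a concrete illustration, below is a minimal sketch of the SVD-based rigid body registration in the style of [4], mapping the pinpointed base-frame points rX onto the measured marker-frame points mX; the function name and 4×4 output format are illustrative assumptions:

import numpy as np

def rigid_registration(P, Q):
    # Find R, t minimizing sum ||(R p + t) - q||^2 over corresponding
    # (N,3) point sets P and Q; returns the 4x4 transform mapping P to Q.
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)                   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard reflections
    R = Vt.T @ D @ U.T
    t = q0 - R @ p0
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

For example, mTr = rigid_registration(rX, mX) maps robot base-frame points into the marker frame, given at least 4 non-collinear correspondences.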

In order to improve the extrinsic calibration's accuracy, a non-linear optimization of cTm is also performed, since during the homography decomposition a polar decomposition is performed to get a valid rotation matrix cRm, which causes the result to be non-optimal. This optimization, as shown in Eq. (2), can be done by tuning the initial cTm to minimize the re-projection error:

argmincRm;

c tm

X4Lj¼1

U j−π K cRmmX j þ ctmÞÞk2:

����� ð2Þ

Fig. 2. Intrinsic calibration of the camera.

While the extrinsic calibration of the 3D camera used for scanning could be done in a similar manner, a faster and more efficient solution is to perform the classic robot hand–eye calibration: by moving the robot end-effector to different poses relative to its base (Bi, Bj) and correspondingly estimating, through Apriltags, the extrinsic calibration marker's pose relative to the 3D camera reference frame (Ai, Aj), equations AiXBi = AjXBj can be established, where X = tTc represents the 3D camera's pose relative to the robot's tool coordinate frame, as shown in Fig. 3(b). After rearranging such equations into the form AX = XB, where \(A = A_j^{-1}A_i\) and \(B = B_j B_i^{-1}\), the calibration can be solved by Tsai and Lenz's [36] method. It is worth noting that the previous extrinsic calibration of the camera for assembly could also be solved as a hand–eye calibration problem, even though the camera is not fixed on the manipulator's end-effector, to further simplify the calibration process.

Fig. 3. Extrinsic calibration.
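For reference, this AX = XB problem can be sketched with OpenCV's hand–eye solver (available since OpenCV 4.1), which implements Tsai and Lenz's method among others; the 4×4-pose input format and helper name here are assumptions for illustration:

import numpy as np
import cv2

def hand_eye(base_T_tool_list, cam_T_marker_list):
    # base_T_tool_list: tool poses Bi in the robot base frame (encoders);
    # cam_T_marker_list: marker poses Ai in the 3D camera frame (Apriltags).
    # Returns X = tTc, the 3D camera's pose in the tool frame, as a 4x4.
    R_g2b = [T[:3, :3] for T in base_T_tool_list]
    t_g2b = [T[:3, 3] for T in base_T_tool_list]
    R_t2c = [T[:3, :3] for T in cam_T_marker_list]
    t_t2c = [T[:3, 3] for T in cam_T_marker_list]
    R, t = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                method=cv2.CALIB_HAND_EYE_TSAI)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R, t.ravel()
    return X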

2.2.3. Calibration validation

The calibration can be validated by the following procedure:

1) Fix the extrinsic calibration marker to a new pose that is different from the one used in the calibration;
2) Measure, in the robot's base frame, the 3D coordinates rX of a set of P corner points on the marker (e.g., the 4L corners used previously);
3) Take an image and find the corresponding 3D coordinates cX in the camera reference frame using Apriltags;


4) Calculate the residual using Eq. (3):

\[
\sqrt{\frac{1}{M}\sum_{j=1}^{P} \left\| {}^{r}X_{j} - \left({}^{r}R_{c}\,{}^{c}X_{j} + {}^{r}t_{c}\right) \right\|^{2}}. \tag{3}
\]

This residual should ideally approach zero, as a smaller residual indicates better calibration accuracy.
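A minimal sketch of this check follows, computing the root-mean-square residual of Eq. (3) by mapping the camera-frame points through the calibrated rTc and comparing them against the robot-frame measurements; the averaging here is over the P validation points, and all names are illustrative:

import numpy as np

def calibration_residual(rX, cX, rT_c):
    # rX, cX: (P,3) corresponding points; rT_c: 4x4 camera-to-base pose.
    R, t = rT_c[:3, :3], rT_c[:3, 3]
    err = rX - (cX @ R.T + t)                          # per-point 3D errors
    return np.sqrt(np.mean(np.sum(err ** 2, axis=1)))  # RMS residual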

2.3. Automatic assembly planner

2.3.1. Algorithmic architectural design

Starting with the introduction of Ivan Sutherland's Sketchpad at MIT in 1963, the architectural exploration of computation has focused on the digital environment's ability to represent an object as a system "comprised of and working with a series of interrelated systems" [1], a surprising contrast to the discrete geometric representations found in many 2D and 3D CAD applications. Initially a domain of specialized research groups embedded in academia or commercial practices, this systems approach to digital design has become increasingly commonplace. In the 1990s many architects, unsatisfied with the capabilities presented by off-the-shelf software, began to develop their own software solutions through both higher-level scripting languages for CAD packages and ground-up application development. Currently, visual programming interfaces that afford designers quick access to the potential of computation without the effort of coding syntax are available as plug-ins for popular commercial software, such as Dynamo6 for Autodesk's Revit and Grasshopper7 for McNeel's Rhinoceros.

The authors implemented this systems approach (and its respective tools) to automatically derive the robotic positioning data for each building block in a curved stack-unit wall from a single user-generated non-uniform rational basis spline (NURBS) surface. Working from a predetermined block size, an algorithm developed in the Grasshopper plug-in for Rhinoceros extracts latitudinal section curves from the input surface, generating a running bond pattern of the said blocks. Each block is checked for volumetric collisions with adjacent blocks, color-coding collisions in the Rhinoceros environment. Instead of applying simple heuristics to arbitrarily resolve these collisions, the color-coding can provide real-time feedback, engaging the designer to actively participate in the development of the curved stack-unit wall. Simultaneously, the necessary positional information for each block is output for the generation of the assembly plan. Combined into a single algorithmic process, these functions "enable an explicit and bidirectional traversal of the modern division between design and making" [28], reinforcing the implications of William Mitchell's statement that "architects tend to draw what they can build, and build what they can draw" [25].

6 http://autodeskvasari.com/dynamo.
7 http://www.grasshopper3d.com/.

2.3.2. Assembly plan generation and simulation

Given the final positions and orientations of all the building blocks in the design, the assembly plan is generated and written into a text file stored for the plan achiever to process later during the online building phase.

This assembly plan file contains a list of sequential instructions for the robot manipulator to build the designed structure. Each line in the file corresponds to an instruction. For example, the following plan file segment will instruct the robot manipulator to first grab a building component named "block0" directly from above (lines 1–4), then lift it vertically up by 500 mm (line 5), and finally place it at its destination in another reference frame named "building" (lines 6–8):

Gripper 0
Goto block0 0 0 500 0 0 0
Goto block0 -12 -10 -10 0 0 0
Gripper 1
Shift 0 0 500 0 0 0
Goto building 200.00 -300.00 500.00 -63.92 0.00 0.00
Goto building 200.00 -300.00 19.05 -63.92 0.00 0.00
Gripper 0

Currently, 3 types of instructions are implemented in the system:

1. Gripper 0/1
Control the manipulator's gripper to open (0) or close (1);

2. Goto reference_frame x y z a b c
Control the manipulator to move to a new pose (x, y, z, a, b, c) in the reference frame, in which (x, y, z) is the new position and (a, b, c) specifies the new orientation as three Euler angles in "ZYX" order;

3. Shift x y z a b c
Control the manipulator to incrementally move by (x, y, z, a, b, c).
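To illustrate how directly this plan format can be consumed, the following is a minimal sketch of a parser for the three instruction types; the class and function names are hypothetical, not taken from the authors' Python plan achiever:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Instruction:
    kind: str               # "Gripper", "Goto", or "Shift"
    frame: Optional[str]    # reference frame name (Goto only)
    args: tuple             # gripper state, or (x, y, z, a, b, c)

def parse_plan(path):
    steps = []
    with open(path) as f:
        for line in f:
            tokens = line.split()
            if not tokens:
                continue
            if tokens[0] == "Gripper":     # Gripper 0/1
                steps.append(Instruction("Gripper", None, (int(tokens[1]),)))
            elif tokens[0] == "Goto":      # Goto reference_frame x y z a b c
                steps.append(Instruction("Goto", tokens[1],
                                         tuple(map(float, tokens[2:8]))))
            elif tokens[0] == "Shift":     # Shift x y z a b c
                steps.append(Instruction("Shift", None,
                                         tuple(map(float, tokens[1:7]))))
    return steps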

This assembly plan can also be simulated in Rhinoceros to check if there exists any collision between the robot manipulator and the wall during the building process, as shown in Fig. 4.

Fig. 4. Assembly simulation.

2.4. Vision-based plan achiever

2.4.1. Rapid setup of building reference frame

As previously mentioned, the reversed spatial relationship of product and manipulator on construction sites poses a significant challenge for autonomous mobile robots. This is notably different from typical autonomous manufacturing spatial configurations, where robots' bases are either stationary or have finite mobility, and materials/components


can be readily staged at fixed locations within the manipulators' static workspaces. In contrast, for mobile robots to autonomously perform building tasks on unstructured construction sites, their bases require significant mobility, and consequently their manipulators' workspaces are not fixed with respect to the construction site. In order to complete building tasks at the correct locations and assemble materials into their intended poses, a robotic system must be able to establish the accurate 6-DOF transformation between the robot's base and the building reference frame at all times. As pointed out in [32], this requires the localization accuracy to be at least at centimeter level, which could not be easily achieved using state-of-the-art Visual SLAM style techniques for mobile robots.

In order to address this challenge, the authors propose a convenient and accurate solution using planar marker-based pose estimation [10,26], as shown in Fig. 5. By 1) attaching fiducial markers at appropriate locations on-site where building tasks are to be performed, 2) surveying their poses mTb in the building reference frame using a total station, and 3) storing these poses inside the system's database, a mobile robot can readily estimate its base's pose bTr inside the building reference frame using Eq. (4) whenever its on-board camera detects such a marker, based on previous calibration results:

\[
{}^{b}T_{r} = \left({}^{r}T_{c}\,{}^{c}T_{m}\,{}^{m}T_{b}\right)^{-1}. \tag{4}
\]
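A minimal sketch of this marker-based localization, and of the plan-to-command frame conversion it enables in Section 2.4.2, is shown below; all poses are 4×4 homogeneous transforms, and the helper names are illustrative:

import numpy as np

def localize_base(rT_c, cT_m, mT_b):
    # Eq. (4): chain the calibrated camera pose, the detected marker pose,
    # and the surveyed marker pose to get bTr, the robot base pose in the
    # building reference frame.
    return np.linalg.inv(rT_c @ cT_m @ mT_b)

def building_to_base(bT_r, bT_goal):
    # Convert a plan pose given in the building frame into the robot base
    # frame before sending it to the controller.
    return np.linalg.inv(bT_r) @ bT_goal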

2.4.2. Conversion from plans to commands

With the information input from the pose estimator, the vision-based plan achiever starts to execute the assembly plan generated beforehand, according to the following procedure:

1) Read a single plan step (i.e., one line) from the assembly plan file;
2) Wait until all the poses needed to convert this step into a building command become available;
3) Convert this step into a command that is executable by the robot controller;
4) Send the command to the robot controller;
5) Wait for the controller to complete the command;
6) Repeat this process until all plan steps are completed, i.e., the plan is achieved.

Fig. 5. Different reference frames.

It must be noted that the core step of this procedure is the conversion from a plan step to an executable command. This is because the poses stored in the previously generated assembly plan are not completely specified in the robot's base reference frame. Recall that every pose in a "Goto" step is specified relative to a "reference_frame". Specifying all steps in the assembly plan in the robot base reference frame is not possible because: a) during the design phase the designer conceives all component locations in the building reference frame; b) the robot's base is expected to be mobile during the building phase; and c) more importantly, the building components will be arbitrarily transported and staged in the building reference frame in the vicinity of the robot manipulator's workspace. This, in fact, is one of the core differences between on-site construction automation and manufacturing automation.

In the authors' approach, this conversion is facilitated by the aforementioned rapid setup of the building reference frame using markers. As long as the pose estimator can detect and report the transformation between the on-board camera and the marker used to specify the building reference frame, the poses in the plan steps can be readily converted to the robot base frame using equations similar to (4). Similarly, by attaching markers on the building components, the robot manipulator can detect and clasp them autonomously after the corresponding plan steps are converted.

2.5. As-built point clouds from 3D camera

Due to operation and maintenance requirements, construction project owners nowadays are often more satisfied if given access to not only an as-designed BIM but also an as-built BIM of a project. As reviewed previously, a common practice is to start with various 3D scanners such as TLS to create 3D point clouds of the built environment. These point clouds are then manually or semi-automatically abstracted into CAD-like 3D geometric objects conveying semantic meanings, such as walls, floors, doors, windows, and utility pipelines.

Since a single scan is more often than not limited by the 3D scanner's field of view and cannot fully cover a construction project's outcome, multiple scans need to be carefully planned, performed, and then registered into a single reference frame. As pointed out by previous work, the automation of such planning and registration becomes important to improve the productivity as well as the quality of the final point clouds.

Facilitated by the proposed visual marker-based metrology and the robot's internal encoders, the process to automatically plan, scan, and register different frames of 3D point clouds becomes readily available on the proposed robotic platform with the minimal effort of mounting and calibrating a 3D camera on the manipulator, when reusing existing


components for autonomous assembly. Whenever a new frame of 3D point clouds cPi (∀i = 1, 2, 3, …) is captured under the scan planner's control (and thus expressed naturally in the 3D camera's reference frame), it is transformed into the building reference frame by Eq. (5):

\[
{}^{b}P_{i} = {}^{b}T_{r}\,{}^{r}T_{t}\,{}^{t}T_{c}\,{}^{c}P_{i}, \quad \forall i = 1, 2, 3, \ldots \tag{5}
\]

where tTc is the 3D camera's pose relative to the robot's tool reference frame, i.e., the hand–eye calibration result obtained in Section 2.2.2, and rTt is the robot's tool pose relative to its base reference frame, maintained by the robot's internal encoders. As noted earlier, the robot's base pose in the building reference frame, bTr, is estimated through our marker-based metrology in Eq. (4).
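The registration chain of Eq. (5) reduces to one matrix product per captured frame; below is a minimal sketch, assuming each frame is an N×3 point array and all poses are 4×4 homogeneous transforms:

import numpy as np

def register_frame(bT_r, rT_t, tT_c, cP):
    # Express camera-frame points cP in the building reference frame.
    T = bT_r @ rT_t @ tT_c                          # camera -> building
    homog = np.hstack([cP, np.ones((len(cP), 1))])  # to homogeneous coords
    return (homog @ T.T)[:, :3]

The final site point cloud is then simply the concatenation (e.g., via np.vstack) of all registered frames.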

3. Results and discussion

3.1. Assembly experiments

The authors implemented the designed algorithms in a robotic system using a 7-axis KUKA KR100 robotic arm with sub-millimeter level accuracy, a Point Grey Firefly MV camera, and a laptop with an Intel i7 CPU, connected through the Robot Operating System (ROS).8 Each component in Fig. 1 except for the assembly planner and the robot manipulator is a process corresponding to a ROS node. The camera node sends images of size 1280 pixels by 960 pixels to the pose estimator, which implements the Apriltag detection algorithm in C/C++. The plan achiever was implemented in Python. The robot controller was also developed in Python to send and receive control signals via Ethernet through KUKA's native Robot Sensor Interface (RSI). Inside this controller, a 6-DOF PID control algorithm was employed to drive the robot manipulator (with a two-finger gripper) to its destination pose when executing commands from the plan achiever. The involved inverse kinematics computations are performed inside the KUKA manipulator's controlling middleware.

8 http://www.ros.org/.
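As an illustration of the controller structure, below is a minimal sketch of a 6-DOF PID update assuming independent per-axis control of the (x, y, z, a, b, c) pose error; the gains, the cycle time, and all names are illustrative assumptions rather than the authors' actual RSI controller:

import numpy as np

class Pid6Dof:
    def __init__(self, kp=0.5, ki=0.01, kd=0.05, dt=0.012):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = np.zeros(6)
        self.prev_err = np.zeros(6)

    def step(self, target_pose, current_pose):
        # Both poses are 6-vectors (x, y, z, a, b, c); returns the
        # incremental correction to send to the robot for this cycle.
        err = np.asarray(target_pose) - np.asarray(current_pose)
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv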

In the first phase of experiments, the robot was tasked with assembling a section of a curved wall, as designed in Section 2.3.1. The design was shown in Fig. 1. The building components used were a set of 170 × 70 × 20 mm³ medium-density fiberboard (MDF) blocks, each affixed with two different 56 × 56 mm² Apriltags. The building reference frame was set up by 4 different 276 × 276 mm² Apriltags. The overall goal of the experiment was to test the robot's ability to autonomously build the designed wall.

The system was first calibrated and validated using the methods discussed earlier. The validation residual calculated using Eq. (3) was found to be less than 1 mm. With the accurately calibrated intrinsic and extrinsic parameters, the online building process proceeded smoothly. The building blocks were affixed with smaller markers, which decreased the building block localization accuracy to centimeter level (2 cm) during the clasping process. This error was, however, compensated by the tolerance of the gripper.

A working cycle of the autonomous building process is shown in Fig. 6(a)–(d). Since the pose estimator was constantly monitoring and updating the poses of each marker, the system was naturally capable of automatically adapting to pose changes on-site. As shown in Fig. 6(e)–(h), when a building block's pose changed, the robot manipulator was automatically able to pick it up at its newest location. A video recording of the experiment can be found online at the following URL: http://youtu.be/fj7AXRpj97o. A fully assembled three-layer curved wall approximately 1.5 m in length, and another three-layer circular wall, are shown in Fig. 7.

Fig. 6. Autonomous robotic assembly experiments.

Fig. 7. Curved and circular wall assembled on-site by the designed autonomous robotic system.

3.2. Scanning experiment

The authors implemented the scan module on the same KUKA robotic arm with a Microsoft Kinect as the 3D camera, as shown in Fig. 8, with the scan planner and the scan register implemented in Matlab. The Fabrication Lab (FabLab) in the College of Architecture of the University of Michigan was scanned as an experiment demonstrating the scan module's effectiveness. Using the methods described in Section 2.5, each frame of a newly captured Kinect RGBD image of 640 × 480 pixels is first converted into a frame of colored point cloud


and then transformed into the same building reference frame. The resulting final point cloud is then visualized in the Point Cloud Library (PCL),9 as shown in Fig. 9.

Fig. 8. Scan module of the designed robotic system.

Fig. 9. An as-built 3D point cloud of the FabLab at the University of Michigan.

3.3. Limitations and future work

There are several limitations in the above implementation of the proposed vision-guided robotic assembly and scanning solution that need to be overcome to facilitate its future application on real-world construction sites. These limitations include:

1) Feasibility and robustness of the marker-based pose estimation under different illumination conditions;
2) Occlusion of markers on construction sites may impede the feasibility or even the performance of such systems;
3) Substantial effort may be necessary to set up such systems on-site due to requirements to survey numerous markers and register their poses into the project coordinate frame;
4) Additional effort is needed for attaching markers on each construction material/component, such as blocks, for the robot to recognize and pick them up;
5) The durability of markers is a critical limitation for applications in rugged environments and difficult weather conditions;
6) Lack of systematic and quantitative comparisons of such robotic systems with traditional manual methods.

9 http://pointclouds.org/.

The first concern noted above, on the marker-based pose estimation's robustness, has been addressed by the authors in another research study, with various field experiments proving the feasibility, robustness, and accuracy of applying such a technique in both indoor and outdoor construction environments [11].

Even though the second concern about occlusion is a common limitation of all vision-based methods, it is not such a critical problem in the proposed method. This is because many occlusions of markers on-site are temporary, due to moving equipment or humans. Once the mobile robotic manipulator observes those markers and estimates its own pose in the project reference frame, as long as its base stays static during the occlusion period, the robot can still accurately maintain its pose using its internal encoders.

The third concern about the workload of installing and surveying markers on-site indeed exists in the current system implementation. Yet, compared to the effort of setting up other pose estimation and localization methods that require powered hardware infrastructure such as WLAN or UWB, or methods that only work well in open-air outdoor environments such as GPS, this effort could be a worthwhile tradeoff. Moreover, the authors are working on a novel method that integrates the markers with the Structure-from-Motion technique so that there will be no need to survey these markers using specialized surveying equipment such as a total station. Instead, by simply taking a set of images of those installed markers, their poses can be automatically estimated.

The authors also recognized the fourth and fifth concerns, and worked on removing the requirement of attaching markers on raw construction materials. For example, although not closely related to the



core contribution of this paper, block detection and grasping can be achieved without markers using a Kinect sensor and the authors' newly developed fast plane extraction algorithm [12], as shown in Fig. 10 and demonstrated at the following URL: http://youtu.be/CyX4Pr_xly0. For markers to be attached on indoor construction sites, even printing on paper can achieve reasonably good durability, which is very cost-effective, and such markers can be efficiently and conveniently replaced if destroyed. For outdoor situations, markers can be spray-painted on existing planar surfaces (e.g., walls) or on manufactured wood, foam, or plastic boards.

Lastly, quantitative comparisons with traditional manual methods are a logical future direction in this research. Such comparisons will be more meaningful when this research advances to the next stage, where the mobile manipulator can actually build stable structures with cement mortar, rather than the proof-of-concept stage described in this paper.

4. Conclusions

This paper reported algorithms and an implemented robotic system that is able to automatically generate assembly plans from computational architectural designs, achieve these plans autonomously on construction sites, and create as-built 3D point clouds. In order to address the localization accuracy challenge, the authors proposed a computer-vision-based sub-centimeter-level metrology that enables pose estimation using planar markers. The conducted evaluation experiments used the designed robotic system to autonomously assemble various structures, such as a curved wall of MDF blocks, proving the algorithms' and the system's ability to meet the accuracy requirement when building computational architecture designs. In sum, with markers, an unstructured construction site can be rapidly configured to allow autonomous mobile manipulators to localize themselves and thus perform assembly and scanning tasks. In this way, the challenges of unstructured environments and mobility can be efficiently addressed.

In addition to the long-term future goals mentioned in Section 3.3, the authors' current and planned work in this research direction in the short term is focused on continuously improving the fundamental methods used in this system along the following directions:

1) Perception: using a camera on the manipulator to further improve the pose estimation accuracy through multiple observations and sensor fusion; improving 3D perception to help improve the safety of the robotic system, preparing for its collaboration with human workers on-site.

2) Navigation: incorporating state-of-the-art SLAM algorithms with the proposed marker-based metrology in the system to increase its range of autonomous movement.

3) Hardware and control: designing a suitable robotic platform with a mobile base and an on-board manipulator of sufficient payload capacity for indoor and outdoor construction activities that are more complex


than the block-laying activity described in this paper, and exploring more sophisticated control algorithms to enable such complex construction tasks.

4) Applications: extending the proposed autonomous system to other construction or architecture applications.

Fig. 10. Block detection and grasp using Kinect.

References

[1] S. Ahlquist, A. Menges, Introduction: Computational Design Thinking, in: A. Menges, S. Ahlquist (Eds.), Computational Design Thinking, John Wiley & Sons Ltd., West Sussex, 2011, pp. 10–29.

[2] M. Akula, R. Lipman, M. Franaszek, K. Saidi, G. Cheok, V. Kamat, Real-time drill monitoring and control using building information models augmented with 3D imaging data, Autom. Constr. 36 (2013) 1–15.

[3] C. Balaguer, Nowadays trends in robotics and automation in construction industry: transition from hard to soft robotics, Proceedings of International Symposium on Automation and Robotics in Construction, Jeju, Korea, 2004.

[4] P.J. Besl, N.D. McKay, A method for registration of 3-D shapes, IEEE Trans. Pattern Anal. Mach. Intell. 14 (2) (1992) 239–256.

[5] I. Brilakis, M. Lourakis, R. Sacks, S. Savarese, S. Christodoulou, J. Teizer, A. Makhmalbaf, Toward automated generation of parametric BIMs based on hybrid video and laser scanning data, Adv. Eng. Inform. 24 (4) (2010) 456–465.

[6] Bureau of Labor Statistics, Census of Fatal Occupational Injuries, Retrieved 06/17/2012, from http://www.bls.gov/iif/oshcfoi1.htm#2012, 2012.

[7] J. Chung, S. Lee, B.-J. Yi, W. Kim, Implementation of a foldable 3-DOF master device to a glass window panel fitting task, Autom. Constr. 19 (7) (2010) 855–866.

[8] K. Dierichs, T. Schwinn, A. Menges, Robotic Pouring of Aggregate Structures, in: S. Brell-Cokcan, J. Braumann (Eds.), Robotic Fabrication in Architecture, Art, and Design, Springer Wien New York, New York, 2012, pp. 196–205.

[9] A. Dubor, G.-B. Diaz, Magnetic Architecture, in: S. Brell-Cokcan, J. Braumann (Eds.), Robotic Fabrication in Architecture, Art, and Design, Springer Wien New York, New York, 2012, pp. 206–213.

[10] C. Feng, V.R. Kamat, Plane registration leveraged by global constraints for context-aware AEC applications, Comput.-Aided Civ. Infrastruct. Eng. 28 (5) (2012) 325–343.

[11] C. Feng, S. Dong, K.M. Lundeen, Y. Xiao, V.R. Kamat, Vision-based articulated machine pose estimation for excavation monitoring and guidance, Proceedings of the 32nd International Symposium on Automation and Robotics in Construction, Oulu, Finland, 2015.

[12] C. Feng, Y. Taguchi, V.R. Kamat, Fast plane extraction in organized point clouds using agglomerative hierarchical clustering, Proceedings of the IEEE International Conference on Robotics and Automation, Hong Kong, 2014.

[13] T. Fukuda, Y. Fujisawa, F. Arai, H. Muro, K. Hoshino, K. Miyazaki, …, K. Uehara, A new robotic manipulator in construction based on man–robot cooperation work, Proc. of the 8th International Symposium on Automation and Robotics in Construction, 1991, pp. 239–245.

[14] E. Gambao, C. Balaguer, F. Gebhart, Robot assembly system for computer-integrated construction, Autom. Constr. 9 (5) (2000) 479–487.

[15] E. Gambao, M. Hernando, D. Surdilovic, A new generation of collaborative robots for material handling, Gerontechnology 11 (2) (2012) 368.

[16] S. Han, H. Cho, S. Kim, J. Jung, J. Heo, Automated and efficient method for extraction of tunnel cross sections using terrestrial laser scanned data, J. Comput. Civ. Eng. 27 (3) (2012) 274–281.

[17] A. Hansson, M. Servin, Semi-autonomous shared control of large-scale manipulator arms, Control. Eng. Pract. 18 (9) (2010) 1069–1076.

[18] R. Hartley, A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2000.

[19] V. Helm, S. Ercan, F. Gramazio, M. Kohler, In-situ robotic construction: extending the digital fabrication chain in architecture, Proceedings of the 32nd Annual Conference of the Association for Computer Aided Design in Architecture, San Francisco, 2012, pp. 169–176.

[20] B. Kahane, Y. Rosenfeld, Real-time "sense-and-act" operation for construction robots, Autom. Constr. 13 (6) (2004) 751–764.

[21] Y.-S. Kim, C.T. Haas, A model for automation of infrastructure maintenance using representational forms, Autom. Constr. 10 (1) (2000) 57–68.

[22] R. Kümmerle, B. Steder, C. Dornhege, M. Ruhnke, G. Grisetti, C. Stachniss, A. Kleiner, On measuring the accuracy of SLAM algorithms, Auton. Robot. 27 (4) (2009) 387–407.

[23] C. Milberg, I. Tommelein, Role of tolerances and process capability data in product and process design integration, Proceedings of Construction Research Congress, 8, 2003.

[24] C. Milberg, I.D. Tommelein, Application of tolerance mapping in AEC systems, Proceedings of Construction Research Congress, 2005.

[25] W.J. Mitchell, Roll Over Euclid: How Frank Gehry Designs and Builds, in: Frank Gehry (Ed.), Architect (354), The Solomon R. Guggenheim Foundation, New York, 2001.

[26] E. Olson, AprilTag: a robust and flexible visual fiducial system, Proceedings of the 2011 IEEE International Conference on Robotics and Automation, 2011, pp. 3400–3407.

[27] G. Paul, D. Liu, N. Kirchner, S. Webb, Safe and efficient autonomous exploration technique for 3D mapping of a complex bridge maintenance environment, Proc. 24th International Symposium on Automation and Robotics in Construction, 2007, pp. 99–104.

[28] D. Pigram, I. Maxwell, W. McGee, B. Hagenhofer-Daniell, L. Vasey, Protocols, pathways, and production, in: S. Brell-Cokcan, J. Braumann (Eds.), Robotic Fabrication in Architecture, Art, and Design, Springer Wien New York, New York, 2012, pp. 143–147.

[29] G. Pritschow, M. Dalacker, J. Kurz, Configurable control system of a mobile robot for on-site construction of masonry, Proceedings of the International Symposium on Automation and Robotics for Construction and Mining, 1993.

[30] E. Rojas, P. Aramvareekul, Is construction labor productivity really declining? J. Constr. Eng. Manag. 129 (1) (2003) 41–46.

[31] K. Saidi, J. O'Brien, A. Lytle, Robotics in construction, Springer Handbook of Robotics, Springer, Berlin Heidelberg, 2008, pp. 1079–1099.

[32] I.M. Shohet, Y. Rosenfeld, Robotic mapping of building interior—precision analysis, Autom. Constr. 7 (1) (1997) 1–12.

[33] A. Slocum, B. Schena, Blockbot: a robot to automate construction of cement block walls, Robot. Auton. Syst. 4 (2) (1988) 111–129.

[34] H. Son, C. Kim, C. Kim, Fully automated as-built 3D pipeline extraction method from laser-scanned data based on curvature computation, J. Comput. Civ. Eng. 29 (4) (2014) B4014003.

[35] B. Triggs, P.F. McLauchlan, R.I. Hartley, A.W. Fitzgibbon, Bundle adjustment—a modern synthesis, Vision Algorithms: Theory and Practice, Springer, 2000, pp. 298–372.

[36] R. Tsai, R. Lenz, A new technique for fully autonomous and efficient 3D robotics hand/eye calibration, IEEE Trans. Robot. Autom. 5 (3) (1989) 345–358.

[37] S.-N. Yu, B.-G. Ryu, S.-J. Lim, C.-J. Kim, M.-K. Kang, C.-S. Han, Feasibility verification of brick-laying robot using manipulation trajectory and the laying pattern optimization, Autom. Constr. 18 (5) (2009) 644–655.

[38] Z. Zhang, A flexible new technique for camera calibration, IEEE Trans. Pattern Anal. Mach. Intell. 22 (11) (2000) 1330–1334.

[39] Z. Zhu, S. Donia, Potentials of RGB-D cameras in as-built indoor environments modeling, Proceedings of the ASCE International Workshop on Computing in Civil Engineering, Los Angeles, CA, 2013, pp. 23–25.