
Progress Monitoring Technology Using AI

This post introduces Reconstruct, founded by Professor Mani Golparvar-Fard of the University of Illinois at Urbana-Champaign. The company began when his lab, which researched computer vision topics such as image processing, started developing a construction-management platform. It is presented as a platform that makes aggressive use of artificial intelligence, and its ultimate goal is to fully automate monitoring so that contractors can devote themselves entirely to the work of building.

A construction schedule is, in the end, only a plan: it must be continually revised against actual site conditions and re-established as an updated plan. But monitoring a congested, fast-changing construction site is a very hard problem. For automated monitoring to compute progress and set new construction targets, the system must continuously acquire site images, keep as-built and as-planned information aligned around the BIM, and update the schedule accordingly.
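To make the last step of that loop concrete, here is a minimal sketch (my own illustration, not Reconstruct's code) of rolling per-element detection results up into a progress figure that can be compared against the plan; the field names are hypothetical.

```python
from datetime import date

def progress_report(elements, today):
    """elements: dicts with hypothetical fields 'detected_complete'
    (bool, from the as-built model) and 'planned_finish' (date, from
    the schedule). Returns actual vs. planned completion fractions."""
    actual = sum(e["detected_complete"] for e in elements) / len(elements)
    planned = sum(e["planned_finish"] <= today for e in elements) / len(elements)
    return {"actual": actual, "planned": planned, "gap": planned - actual}

# e.g. report = progress_report(bim_elements, date.today())
```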

Reconstruct periodically captures site conditions as images using drones, helmet-mounted cameras worn by workers, and crane cameras and 3D scanners installed on tower cranes, and integrates this information into a 3D model.

It is a very interesting approach, and a technology from which much future progress can be expected. Below I share the content introduced on the Reconstruct blog.

https://medium.com/reconstruct-inc/what-ai-can-do-for-progress-monitoring-b47526fece5f

Towards this end, we’ve built a prototype ground robot for indoor 3D mapping (see video above). The robot can learn to cover a designated path while avoiding people and other obstacles and taking a 360 video. It works well, but can be foiled by a two-by-four left blocking the hallway, so for now our indoor solution uses progress photos and videos captured by humans with Reconstruct Field.

For areas that are hard to access, we’ve also designed a way for a drone to autonomously place a camera onto a steel surface, such as a wide-flange beam, as shown in the video above. The drone picks up and drops off the camera using a cam mechanism, similar to the clicker of a ball-point pen. The camera sticks to the steel surface using an attached magnet. This could be a good way to put wireless, low-power time-lapse cameras all over the construction site for continuous capture.

Aligning Reality to Plans

The next step in progress monitoring is to align photos and point clouds to building information models (BIM) that specify the geometry and material of each element to be constructed.

A tag designed for extremely fast and accurate detection, for aligning reality capture to BIM.

One way to precisely align is to place markers, or tags, around the site. We’ve designed “ChromaTag” (above), which can be detected more accurately and 10x to 50x faster than existing markers. The key is to encode the pattern for detection and identification within the “a” and “b” channels of the CIE L*a*b* color space. Because these colors aren’t often seen next to each other in natural scenes, the tag can be detected very quickly, while detectors for colorless tags have to spend a lot of time sorting through the more common high-contrast regions. We also designed the tag detection system to look only for differences, making it robust to variations in lighting and printing. As a result, the tag can be detected at 925 frames per second (30x real time!) with virtually no misidentified tags. Using these markers, we can align photos captured indoors to building plans, even in repeated structures such as hotels, with detection algorithms built into smartphones and cameras to ensure completeness during capture.
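Both ideas can be sketched compactly. Below is a minimal illustration, not Reconstruct's ChromaTag detector: the first function finds candidate tag regions from local differences in the a* and b* channels (a real detector would also decode the tag's pattern), and the second estimates the rigid transform into BIM coordinates from matched marker centers using the standard Kabsch method. The OpenCV/NumPy usage and all thresholds are assumptions.

```python
import cv2
import numpy as np

def tag_candidates(bgr, diff_thresh=40, min_area=100):
    """Find candidate tag regions from local differences in the a* and b*
    channels of CIE L*a*b*. Working with differences rather than absolute
    colors mirrors the lighting/printing robustness described above."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    _, a, b = cv2.split(lab)                      # ignore lightness (L*)
    kernel = np.ones((5, 5), np.uint8)
    chroma_diff = cv2.max(cv2.morphologyEx(a, cv2.MORPH_GRADIENT, kernel),
                          cv2.morphologyEx(b, cv2.MORPH_GRADIENT, kernel))
    _, mask = cv2.threshold(chroma_diff, diff_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

def rigid_align(src, dst):
    """Kabsch: least-squares rotation R and translation t mapping Nx3
    points src (marker centers in the capture) onto dst (the same
    markers' known positions in BIM coordinates)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - src_c).T @ (dst - dst_c))
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dst_c - R @ src_c
```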

Identifying Construction Elements

Finally, with reality captured and aligned to plans, we need to determine which elements have actually been put in place.

Above, we diagram an overview of our approach to measure progress. When elements are fully visible, it is easy to compare the geometry from our reality model to the BIM specifications. The challenges are that sometimes elements such as multilayer walls are constructed over time with varying materials and that many elements are not fully visible in the reality capture.

Our algorithm identifies materials using deep learning with images and 3D points.

To address the challenge of layered elements, we created an algorithm that learns to identify materials based on image features and 3D surfaces produced by photogrammetry (above). The image features are created using deep learning, or convolutional neural networks, a technique that has recently revolutionized pattern recognition. Materials are sometimes difficult to identify based on either appearance or texture, but we find that using both together provides significant improvements, reducing error by about 15%.
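As a sketch of the fusion idea (not the paper's architecture; the feature dimensions, layer sizes, and class count below are all assumptions), one can concatenate a CNN image descriptor with a geometry descriptor from the photogrammetry surface and classify them jointly:

```python
import torch
import torch.nn as nn

class MaterialFusionNet(nn.Module):
    """Classify a surface patch's material from concatenated image and
    geometry cues. Dimensions are illustrative: a 512-D CNN descriptor
    per image patch and a 32-D descriptor summarizing the photogrammetry
    surface (e.g., normal and curvature statistics)."""
    def __init__(self, img_dim=512, geo_dim=32, num_materials=10):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + geo_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_materials),
        )

    def forward(self, img_feat, geo_feat):
        # The joint input is what lets geometry disambiguate materials
        # that look alike in images, and vice versa.
        return self.classifier(torch.cat([img_feat, geo_feat], dim=1))
```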

To determine if a BIM element is in place, we can analyze its similarity to the 3D points and backproject its surface to images, as shown above. Then, we can classify whether the shape and material of each element is present in the as-built model. We also analyze constraints in the schedule and physical connectivity of elements to help infer whether occluded elements have been put in place.
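The geometric half of that check can be sketched as a point-support test, assuming each BIM element is summarized by points sampled on its surface; the tolerances are illustrative, and the material check plus the schedule/connectivity reasoning for occluded elements would layer on top:

```python
import numpy as np
from scipy.spatial import cKDTree

def element_present(element_surface_pts, as_built_pts, tol=0.05, min_support=0.6):
    """Return True when at least min_support of the element's sampled
    surface points (Nx3, BIM coordinates) have an as-built point within
    tol meters, i.e., the reality capture supports the element's shape."""
    tree = cKDTree(as_built_pts)                # index the reality capture
    dists, _ = tree.query(element_surface_pts)  # nearest as-built point
    return float(np.mean(dists < tol)) >= min_support
```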

Our algorithm automatically color-codes a BIM to show which planned elements are completed (green) or missing (red).

When will automated progress monitoring be ready for the field?

We clearly have all the pieces — autonomous data capture, markers for alignment, material and shape recognition, analysis of schedule and structural constraints, and a plan to put it all together. But as leaders in technology, we understand the limitations, as well as the potential, of what is possible with today’s AI technology. It will still be a few years before technical and regulatory hurdles are sufficiently overcome for daredevil drones and roving robots to automatically update a construction manager’s progress dashboard in real time. Meanwhile, at Reconstruct, we are committed to streamlining the process by gradually increasing automation and integrating non-automated pieces into standard workflows such as field reports. So, using today’s AI technology, Reconstruct already makes it easy for owners, managers, and sub-contractors to stay up-to-date on progress.

References

Below are some of the technical papers that describe the original research discussed here. Golparvar-Fard is CEO, Hoiem is CTO, and Bretl, DeGol, Han, and Lin are co-founders of Reconstruct.

A Passive Mechanism for Relocating Payloads with a Quadrotor. J. DeGol, D. Hanley, N. Aghasadeghi, T. Bretl. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2015.

ChromaTag: A Colored Marker and Fast Detection Algorithm. J. DeGol, T. Bretl, and D. Hoiem. IEEE International Conference on Computer Vision (ICCV). 2017.

Integrated sequential as-built and as-planned representation with D4AR tools in support of decision-making tasks in the AEC/FM industry. M. Golparvar-Fard, F. Peña-Mora, and S. Savarese. Journal of Construction Engineering and Management. 2011.

Appearance-based Material Classification for Monitoring of Operation-level Construction Progress using 4D BIM and Site Photologs. K. Han and M. Golparvar-Fard. Automation in Construction. 2015.

Geometry-Informed Material Recognition. J. DeGol, M. Golparvar-Fard, and D. Hoiem. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016.

D4AR–a 4-dimensional augmented reality model for automating construction progress monitoring data collection, processing and communication. M. Golparvar-Fard, F. Peña-Mora, and S. Savarese. Journal of Information Technology in Construction. 2009.

Visualization of construction progress monitoring with 4D simulation model overlaid on time-lapsed photographs. M. Golparvar-Fard, F. Peña-Mora, C.A. Arboleda, and S. Lee. Journal of Computing in Civil Engineering. 2009.

Automated progress monitoring using unordered daily construction photographs and IFC-based building information models. M. Golparvar-Fard, F. Peña-Mora, and S. Savarese. Journal of Computing in Civil Engineering. 2012.

Formalized knowledge of construction sequencing for visual monitoring of work-in-progress via incomplete point clouds and low-LoD 4D BIMs. K. Han, D. Cline, and M. Golparvar-Fard. Advanced Engineering Informatics. 2015.
