Augmented reality (AR) is an intriguing technology with the potential to transform how we live and interact with the world around us. However, achieving a convincing visual integration of digital and real-world elements is difficult, and the accuracy demanded by engineering work makes AR applications in that industry particularly challenging.
Why is Augmented Reality accuracy so difficult?
For a tablet display to present virtual items in the correct place, the AR app must be able to track the tablet’s position in real time. That problem is difficult to solve precisely on a handheld device because it can require complex calculations and several sensors, which are difficult to deliver in a device constrained by CPU power, battery capacity, and size…
Early systems tackled this problem by tracking markers (such as QR codes). The AR programme determines the tablet’s position from the shape and orientation of the marker as it appears in the video captured by the tablet’s camera. Markers work well, but they must be visible in the camera view to be of any value, so you may need to install and survey a lot of them.
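To see why a marker’s appearance constrains the camera’s position, consider the simplest case: the marker’s apparent size in the image fixes the camera’s distance via similar triangles. Here is a minimal sketch with made-up focal length and marker sizes; real marker trackers recover the full position and orientation from the marker’s corner positions, but the principle is the same:

```python
# Illustrative sketch: estimating camera distance from a marker's
# apparent size using the pinhole-camera model (similar triangles).
# All numbers are invented example values.

def distance_to_marker(marker_size_m, focal_length_px, marker_size_px):
    """Distance (metres) at which a marker of known physical size
    appears with the given pixel width in the image."""
    return marker_size_m * focal_length_px / marker_size_px

# A 0.20 m marker imaged at 100 px wide, by a camera whose focal
# length is 800 px, is about 1.6 m away.
d = distance_to_marker(0.20, 800.0, 100.0)
print(round(d, 2))  # 1.6
```

Note how the estimate degrades as the marker shrinks in the image: a one-pixel measurement error matters far more at long range, which is one reason many surveyed markers are needed to cover a site.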
Later, improved computer vision techniques made it possible to track without markers, relying instead on visual cues such as edges and corners. They save you from having to install markers, but because all of these techniques are based on video, factors like poor lighting or low contrast can cause augmentations to appear “shaky” or to be presented in the wrong spot. Despite recent advances, science has not yet solved the camera tracking problem in a way that allows for “anywhere” augmentation.
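One common mitigation for that “shaky” behaviour is to smooth the noisy per-frame pose estimates before rendering, trading a little lag for stability. A minimal one-dimensional sketch using exponential smoothing; the measurements and smoothing factor are invented for illustration:

```python
# Sketch: noisy per-frame pose estimates make augmentations jitter;
# exponential smoothing damps the jitter at the cost of some lag.

def smooth_poses(raw_x, alpha=0.3):
    """Exponentially smooth a sequence of 1-D camera positions.
    Lower alpha = smoother but laggier augmentation."""
    smoothed = [raw_x[0]]
    for x in raw_x[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

# Jittery measurements of a camera that is really standing still at 1.0 m:
raw = [1.00, 1.08, 0.93, 1.05, 0.96]
print(smooth_poses(raw))  # deviations from 1.0 are visibly reduced
```

The filtered positions stay much closer to the true value than the raw ones, which is why real trackers fuse the camera with inertial sensors and temporal filtering rather than trusting each frame in isolation.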
Is there a better way?
Knowing where the camera is located is useful for applications other than augmented reality. Consider Bentley Systems’ ContextCapture technology, which transforms photographs into 3D meshes.
To create such models from photographs, ContextCapture must first go through a stage called “aerotriangulation” (AT), which matches features across photos and precisely determines each photo’s relative position and orientation. The procedure runs offline and can take many minutes to complete, but it yields extremely accurate estimates of… the camera position! So we reasoned: why couldn’t we use those position estimates to augment the photos?
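At heart, aerotriangulation is an optimisation: find the camera poses that best explain where the matched features appear in each photo. As a toy illustration (not ContextCapture’s actual algorithm), the sketch below recovers a camera position in a simplified 2-D world by minimising reprojection error with a grid search; all geometry and numbers are invented:

```python
# Sketch of the quantity aerotriangulation minimises: reprojection error.
# A simplified 2-D world / 1-D image pinhole model; values are illustrative.

def project(point, cam_x, focal=800.0):
    """Project a 2-D world point (x, z) onto a 1-D image for a camera
    at (cam_x, 0) looking along +z."""
    x, z = point
    return focal * (x - cam_x) / z

def reprojection_error(points, observations, cam_x):
    """Sum of squared differences between predicted and observed pixels."""
    return sum((project(p, cam_x) - u) ** 2
               for p, u in zip(points, observations))

points = [(0.0, 4.0), (1.0, 5.0), (-1.0, 2.0)]
true_cam = 0.5
obs = [project(p, true_cam) for p in points]

# A grid search recovers the camera position that best explains the
# observations -- real AT solves this jointly for every photo at once.
best = min((reprojection_error(points, obs, c / 100), c / 100)
           for c in range(-200, 201))[1]
print(best)  # 0.5
```

Solving this jointly for hundreds of photos and thousands of features is what makes AT slow, and also what makes its pose estimates so accurate.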
To put our hypothesis to the test, we did the following:
While an expansion to the second storey was under construction, we flew a drone around our local Bentley office, capturing video.
We extracted all of the frames from the video and used our ContextCapture technology to construct a 3D mesh of the scene.
The generated mesh was then matched with a BIM model of our building.
Finally, we augmented each frame with the BIM model using the computed positions and orientations.
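Once each frame’s camera pose is known, the final step above is just projecting the BIM model’s 3D points into that frame. Here is a minimal sketch of that projection; the pose, intrinsics, and BIM point are illustrative assumptions, not values from our experiment:

```python
# Sketch of the overlay step: with a frame's camera pose known from AT,
# a BIM point is transformed into camera space and projected to pixels.
import math

def project_bim_point(point, cam_pos, yaw, f=800.0, cx=640.0, cy=360.0):
    """Project a world-space BIM point to pixel coordinates for a camera
    at cam_pos, rotated by `yaw` radians about the vertical (y) axis."""
    # World -> camera: translate by the camera position, rotate by -yaw.
    tx = point[0] - cam_pos[0]
    ty = point[1] - cam_pos[1]
    tz = point[2] - cam_pos[2]
    c, s = math.cos(-yaw), math.sin(-yaw)
    xc = c * tx + s * tz
    zc = -s * tx + c * tz
    # Pinhole projection into pixel coordinates (principal point cx, cy).
    return (cx + f * xc / zc, cy + f * ty / zc)

# A BIM corner 10 m straight ahead of an unrotated camera at the origin
# lands at the image centre.
print(project_bim_point((0.0, 0.0, 10.0), (0.0, 0.0, 0.0), 0.0))
```

Because the poses come from AT rather than live tracking, this projection uses the same accurate pose for every frame’s geometry, which is exactly why the overlay stays consistent from frame to frame.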
The results reveal a highly consistent and well-aligned augmentation from frame to frame:
Change detection with augmented reality
Of course, this isn’t quite augmented reality, because it isn’t real-time: owing to the AT process, the augmentation took many minutes to compute. On the other hand, it produced a remarkably consistent augmentation.
Even though it is not real-time, this technology could be highly valuable on a construction site. Consider daily site inspections that look for delays or faults in the construction process. An accurate overlay of the BIM model on reality would make it easy to localise inconsistencies and identify errors.
Here’s how it might work. A drone would fly around the site on a regular basis, capturing photographs and sending them to the cloud, where the 3D mesh would be produced and aligned with the BIM model automatically. The photographs would then be augmented with the BIM model and made available for download within minutes.
From afar, a user could study the augmented photographs and quickly spot delays or problems in the construction process. This method would not only save them multiple visits to the site; it would also allow more frequent monitoring, produce augmentation far more consistent and accurate than handheld augmentation on site, and offer vantage points that would be impossible to reach when walking around the site with live AR on a tablet…
Large infrastructure projects would benefit greatly from such a solution, at least for the asset’s outer shell. Drones outfitted with range-sensing technology are increasingly commonplace, allowing them to fly inside buildings, “seeing” and avoiding objects, and navigating by producing a map. I’m sure you can guess what’s in store.