A Peek at AR/VR & Machine Learning in Construction
Augmented Reality, Virtual Reality, Machine Learning: we hear these terms very often these days. With ML there are tons of things you can do, like object detection, pose detection, text detection, and all kinds of shape detection. AR/VR technology lets you bring the objects you modeled into the real world.
Now, if you’re not excited at this point — check your pulse, because you’re probably dead. #MatthewHallberg
So, what can we develop with these technologies? How do we get started? Since I couldn't find a good book with best practices in AR/VR or ML, I decided to start building stuff and keep building it, and hopefully patterns will emerge eventually.
Hardware. Devices
AR/VR is almost all about the software you're using and what it is compatible with. With that said, let's take a look at the hardware that is used the most at this point, before we jump into ideas that show what can be accomplished.
Augmented Reality devices can be head-mounted or handheld. A handheld device can be your phone or tablet, as long as it is capable of running ARKit (iOS) or ARCore (Android). It gets a little more sophisticated when we are talking about head-mounted devices:
- Oculus
- Microsoft HoloLens 2
- DAQRI Smart Helmet
- HTC Vive Pro
- Magic Leap
What could be worth developing
A couple of weeks ago I joined a brainstorm on how to get started developing applications using AR/VR/ML. We focused specifically on applications for design and visualization in the construction industry. With that said, here is a quick recap of some idea prototypes we came up with.
Idea 1. On-site installation analysis
What it does: Installation quality and quantity control.
How it works: This is an image-processing mobile AR app that uses machine learning. Users will walk through the site and the app will identify specific items (electrical outlets and panels, certain types of walls, etc.) that were installed on site. The app also has to track the user's position against the floor plan, so it always knows where each identified item was installed.
Description: The idea is to use synthetic data: we will have the computer generate and label all the training images in Unity. These images will then be passed to TensorFlow to train a model, and we will use OpenCV to run inference on it. There is a good step-by-step tutorial here. Also, check Jameson Toole's article about Coke can detection.
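To make the inference step concrete, here is a minimal Python sketch of running a TensorFlow detection model through OpenCV's dnn module. The file names and the 300x300 input size are assumptions (typical of SSD-style exports), not part of our prototype yet.

```python
import cv2

# Assumed artifacts from a TensorFlow Object Detection API export.
net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb", "graph.pbtxt")

image = cv2.imread("site_photo.jpg")
h, w = image.shape[:2]

# SSD-style models typically expect a fixed-size square input.
blob = cv2.dnn.blobFromImage(image, size=(300, 300), swapRB=True)
net.setInput(blob)
detections = net.forward()

# Output rows: [_, class_id, score, x1, y1, x2, y2] in relative coordinates.
for det in detections[0, 0]:
    if float(det[2]) > 0.5:
        x1, y1 = int(det[3] * w), int(det[4] * h)
        x2, y2 = int(det[5] * w), int(det[6] * h)
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite("detections.jpg", image)
```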
Idea 2. VR model
What it does: This app brings users into virtual reality so they can see what the area will look like after all the items are installed (ex.: see this link to experience VR).
How it works: This is a mobile app. In VR, users can see what has to be installed in a particular room or area. To locate users and pull up the correct overlay, each room has to have a QR code that users scan. The 3D software (Autodesk Navisworks or Revit) has to have a plugin that takes pictures of a model and exports the images to a cloud service, where they get stitched together. The cloud returns a single panorama made out of those images.
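The QR lookup itself is straightforward; here is a hedged Python sketch using OpenCV's built-in QRCodeDetector. The room IDs and panorama URLs are made-up placeholders for illustration.

```python
import cv2

# Hypothetical mapping from room IDs (encoded in QR codes) to panoramas.
ROOM_PANORAMAS = {
    "room-101": "https://example.com/panoramas/room-101.jpg",
    "room-102": "https://example.com/panoramas/room-102.jpg",
}

detector = cv2.QRCodeDetector()
frame = cv2.imread("camera_frame.jpg")

# detectAndDecode returns the decoded text (empty string if none found),
# the QR corner points, and a rectified crop of the code.
room_id, points, _ = detector.detectAndDecode(frame)
if room_id in ROOM_PANORAMAS:
    print("Load panorama:", ROOM_PANORAMAS[room_id])
else:
    print("No known room code in this frame.")
```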
Description: With OpenCV and some Python code we can detect keypoints and match them between images.
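A minimal sketch of that matching step, assuming ORB features and a brute-force Hamming matcher on two overlapping screenshots (the file names are placeholders):

```python
import cv2

img1 = cv2.imread("view_left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_right.jpg", cv2.IMREAD_GRAYSCALE)

# ORB: a fast, license-free keypoint detector and descriptor.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# crossCheck keeps only mutual best matches, a cheap stand-in for a ratio test.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Draw the 50 strongest matches for a quick visual sanity check.
vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:50], None)
cv2.imwrite("matches.jpg", vis)
```

For the full panorama, OpenCV's cv2.Stitcher_create() wraps this whole detect-match-warp-blend pipeline in a single call, which is probably where we would start.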
Idea 3. On-site assembly order
What it does: Engineers build the assemblies they need based on on-site conditions.
How it works: On-site team members can put together a list of the assemblies they need based on the construction conditions and transfer the list directly to a shop supervisor. The app will work similarly to the Housecraft or IKEA augmented reality apps. Manifolds, electrical boxes, panels, mechanical grilles, louvers, plumbing fixtures, hangers: the app automatically scales products based on room dimensions. To visualize a product within a space, the application scans the extent of a room through the mobile device's camera. Users can browse the product database to make their selections. Once a product is chosen, users point the device at the desired spot in the room, then drag and drop the selection into the space.
Description: ARCore and ARKit do surface detection and tracking very well. With that, we can place an object on a floor or wall and look at it as we move around. Unity has an AR Foundation package that talks to both ARCore and ARKit. The 3D models for the app can start in Revit, but Revit's *.fbx export is limited and doesn't translate cleanly to game engines, so it's easier to use Blender for modeling.
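Blender is scriptable in Python, so the model hand-off could even be automated. A hypothetical bpy snippet (run inside Blender; the paths, import format, and export options are assumptions and vary by Blender version):

```python
import bpy

# Start from an empty scene.
bpy.ops.object.select_all(action="SELECT")
bpy.ops.object.delete()

# Import geometry handed off from the design tool (OBJ as an example).
bpy.ops.import_scene.obj(filepath="/models/electrical_panel.obj")

# Export FBX with settings that tend to behave well in Unity.
bpy.ops.export_scene.fbx(
    filepath="/export/electrical_panel.fbx",
    apply_scale_options="FBX_SCALE_UNITS",
    object_types={"MESH"},
)
```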
Idea 4. Pull objects out of computer screen
What it does: Allows users to pull the drawings and 3D models out of a computer screen.
How it works: This is an Oculus app that mimics the CMU project: a design exploration of multi-device hand-gestural interactions in augmented reality. Users drag content from a computer screen into AR.
Description: The developers prototyped the concepts with an Oculus Rift VR headset and a ZED Mini camera as a pass-through AR solution, coupled with Leap Motion for hand tracking. See more here.
Thank you for reading! Let me know what you think.