This post will remain the latest for a while; it outlines the most recent development on CompuRay. We call this version V3.
Project Objective:
As a brief reminder, our project objective is to augment the experience of troubleshooting or upgrading a traditional desktop computer. The project makes use of the Microsoft HoloLens: the AR device recognizes various components of the computer, and the results are spatially mapped onto the real world through the HoloLens. The HoloLens was trained to recognize the following components of a computer cabinet: CPU, drive bay, SMPS, GPU, RAM, and PCI slots. The following equipment and software were used: Microsoft HoloLens, Unity, and the Microsoft HoloToolKit. The interface for interacting with the HoloLens is voice commands, except during the analysis phase. The advantages of voice commands over the traditional HoloLens gestures include hands-free control, a more natural UI, and an almost zero learning curve for operating the device.
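As a rough illustration of how voice commands can be wired up in Unity for the HoloLens, the sketch below uses Unity's built-in KeywordRecognizer. The command phrases ("Scan", "Clear") and the handler bodies are hypothetical placeholders, not necessarily what CompuRay uses.

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

// Minimal sketch of voice-command registration on HoloLens via Unity.
// "Scan" and "Clear" are illustrative phrases only.
public class VoiceCommands : MonoBehaviour
{
    private KeywordRecognizer recognizer;

    void Start()
    {
        // Register the phrases the HoloLens should listen for.
        recognizer = new KeywordRecognizer(new[] { "Scan", "Clear" });
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        switch (args.text)
        {
            case "Scan":
                // e.g. capture a frame and send it for component detection
                break;
            case "Clear":
                // e.g. remove all previously anchored tags from the scene
                break;
        }
    }

    void OnDestroy()
    {
        // Release the recognizer when the behaviour is torn down.
        if (recognizer != null && recognizer.IsRunning) recognizer.Stop();
        recognizer?.Dispose();
    }
}
```

Because all interaction is phrase-based, the user never needs to learn a gesture vocabulary, which is what gives the near-zero learning curve described above.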
Success Metric:
- Object Detection: The project successfully identifies different components within a specified boundary, and the tags are stably anchored in the real world. The predictions seen on the Azure dashboard are accurate.
- Natural UI: The project provides the user with a very simple voice-command UI that has an almost zero learning curve, plus a single gesture-based interaction step. Voice commands let the user stay fully engaged in the work, with hands free for other tasks.
- Spatial Mapping: The project successfully anchors tags to the right objects in almost all cases; a few exceptions may occur depending on lighting conditions.
Future Work:
This project is built on the Universal Windows Platform provided by Microsoft. During testing, the HoloLens at times failed to detect the correct depth, so tags were anchored slightly offset from the actual component's position. This is an opportunity to dive deeper into the depth-sensing implementation and improve how the HoloLens scans the inside of a desktop computer. Because the project uses the default SpatialMapping script provided by Microsoft, modifications were required to support gesture-based cursor interaction with a hologram menu; however, the script was hard to modify, and changes tended to break the entire app. With more time, we could make the right modifications and give the project an even more user-friendly UI with options to customize the app, such as a reset command that clears the scene and removes all previously anchored tags. We also had difficulty adding 3D objects to the scene due to the SpatialMapping script's configuration; modifying it would let us add interactable objects to the scene based on the tags found.
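To make the depth-offset issue concrete, one common way to anchor a tag at the correct depth is to raycast from the camera against the spatial-mapping mesh and place the tag at the hit point. The sketch below assumes HoloToolkit's convention of putting the spatial mesh on its own physics layer (layer 31 here is an assumption), and `tagPrefab` is a hypothetical tag object:

```csharp
using UnityEngine;

// Sketch: anchor a component tag by raycasting against the spatial mesh.
// The layer number and prefab are assumptions, not CompuRay's actual setup.
public class TagAnchor : MonoBehaviour
{
    public GameObject tagPrefab;            // hypothetical tag prefab
    private const int SpatialMeshLayer = 31; // assumed spatial-mapping layer

    public void PlaceTag(Vector3 directionToComponent)
    {
        int mask = 1 << SpatialMeshLayer;
        // The hit point on the spatial mesh supplies the depth at which
        // the tag should be anchored; a wrong mesh here is exactly what
        // produces the slight offsets described above.
        if (Physics.Raycast(Camera.main.transform.position,
                            directionToComponent,
                            out RaycastHit hit, 10f, mask))
        {
            Instantiate(tagPrefab, hit.point,
                        Quaternion.LookRotation(hit.normal));
        }
    }
}
```

If the spatial mesh itself is inaccurate inside the cabinet (a hard environment for the depth sensor), the raycast lands on the wrong surface, which matches the offset behavior we observed.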
Screenshots:






Video Demo: CompuRay Demo
Please feel free to leave feedback. Thank you!


