This is a two-part article. The first part is a general overview of my thoughts on HoloLens after seeing it at Build; the second part provides more detail about my experience with the device itself.
In January 2015, Microsoft announced HoloLens but provided little information. At this year’s Build event in April, HoloLens was one of the stars and Microsoft gave a lucky few the chance to see and experience it up close. If you missed the keynote, I suggest watching it. This video excerpt specifically shows the portion at the end related to HoloLens.
Before I continue, I would like to share a bit about myself and what I base my opinions on. I have been a developer for over 20 years, and when I attend events like Build, I am foremost thinking like one. Programming is ingrained in me, but I am also a gamer and a kid at heart. Sometimes that part of me pushes logic aside and I get excited about the newness and potential of things.
So I experienced HoloLens at Build wearing those different hats, but I also wear the hat of one of the principal software engineers for Skylight. Since I already have experience working with similar devices and with customers that use our product, I was able to consider how this newcomer could fit into our platform.
The Holographic Platform
The key to all of this is what Microsoft is calling their Holographic Platform. From everything I saw and heard, HoloLens is really secondary to the platform. Indeed, they seemed to stress that HoloLens, at its core, is just another device running Windows 10. I don’t say this to diminish the device itself, but from what I gather, Microsoft hopes to allow other vendors to build devices that can leverage the holographic platform.
The Holographic Platform is basically a set of APIs and an SDK that allow you to create holographic applications that can be visualized by devices such as HoloLens. To start with, any Windows 10 Universal Application can be a hologram and therefore be used by HoloLens.
The image above is from the keynote and represents a view through HoloLens (sort of, which I will address later in this article). The objects on the wall are actually Windows Universal Applications that have been docked to the wall. To get a more in-depth experience from HoloLens, though, you need to leverage the Holographic Platform to create holograms. The Maui weather object (on the table by the chair) and the small robot (on the footstool) are examples of this.
All questions related to hardware specifications were turned away. They are being very close-mouthed about it, which I can understand since it’s still a prototype. The specs will most likely change before the final model is released. You can, however, glean some information from Microsoft’s hardware information page.
The devices we were provided were not some 3D-printed, hacked-together prototypes, but sturdy, production-quality units. I will go into more detail about my experiences during the academy in a separate article.
The one thing they did mention, specifically, is that HoloLens has a built-in GPU and a Holographic Processing Unit (HPU). The HPU is a custom piece of hardware created by Microsoft specifically for processing holograms. My best guess is that this chip is similar to the GPU but also handles processing the stream of sensor data generated by the device, as well as managing the holograms and their locations.
Many articles about HoloLens start out excited but end up somewhat negative. Most of the negativity relates to the limited field of view in which content can be overlaid on the user's vision. Take another look at the HoloLens portion of the keynote. We could see all around where "Darren" sat and could see multiple holograms. However, from my experience, Darren himself would have been seeing only a fraction of this because of the limited field of view provided by the device. What we were seeing was made possible by the device mounted on the camera, which I figure had a much larger field of view.
While this issue is limiting, I don't see it as a deal breaker for the device. First, it's important to note that this is a first-generation device, and it will only get better. There is still a lot of compelling content that can be built within this constraint. Second, no other device provides this type of functionality. I do not include the Oculus Rift here because our product, Skylight, and our customers' use cases focus on enhancing the reality of Skylight users rather than virtualizing the experience. Furthermore, there are things that can be done to help "find" holograms that are currently outside the field of view. We already have the concept of beacons in Skylight (a way to direct the user to an object that is not currently in their field of view), which could be updated and leveraged for this situation.
Also, HoloLens and the Holographic Platform provide us with another ingenious way of finding objects: sound. Imagine an object currently out of the user's FOV emitting a sound. Since the sound is spatialized in three dimensions, the user can follow it to locate the object.
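At its core, both the beacon idea and the spatial-sound cue come down to the same computation: given the user's head pose and a hologram's position, work out which way the user needs to turn. The sketch below is a minimal, hypothetical illustration of that math in plain Python; it is not HoloLens SDK code, and the function name and vector conventions (y-up, z-forward) are my own assumptions.

```python
import math

def yaw_to_target(head_pos, forward, target_pos):
    """Horizontal angle, in degrees, the user must turn to face the target.

    Positive means turn right, negative means turn left. All vectors are
    (x, y, z) tuples with y as 'up', so only x and z matter for yaw.
    """
    # Direction from the user's head to the hologram, projected onto the floor.
    dx = target_pos[0] - head_pos[0]
    dz = target_pos[2] - head_pos[2]
    target_yaw = math.atan2(dx, dz)

    # Yaw the user is currently facing.
    forward_yaw = math.atan2(forward[0], forward[2])

    # Wrap the difference into the range (-180, 180].
    diff = math.degrees(target_yaw - forward_yaw)
    return (diff + 180) % 360 - 180

# Facing straight ahead along +z, with a hologram off to the right along +x,
# the user needs to turn 90 degrees to the right:
print(yaw_to_target((0, 0, 0), (0, 0, 1), (5, 0, 0)))  # 90.0
```

A beacon would render this as an on-screen arrow; a spatial-audio cue effectively conveys the same angle to the user's ears and lets them turn toward it naturally.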
Overall, I was impressed with HoloLens from both a professional and a personal standpoint. This technology is really on the cusp right now, and I, for one, am very excited to see what happens next.