
My Holographic Academy Experience

May 27, 2015

This article is a detailed recounting of my experiences developing Holographic applications for HoloLens at MS Build this year. For a more general overview of my experience with HoloLens, please read my previous article.

I feel privileged that I was able to attend the limited-access Holographic Academy that Microsoft offered during Build. This was approximately 4.5 hours of hands-on use of HoloLens and development on the Holographic Platform. While the scenario was somewhat contrived, and we didn’t get into anything too deep, it was nonetheless exciting and quite impressive.

Academy Introduction

First, we had to lock away all of our devices. There were plenty of cameras around, but they were all Microsoft's. I understood the decision to keep a tight leash on media related to the event, but I would have loved to include pictures (as some of this will be difficult to describe). It was, however, a little distracting when they put a GoPro right beside me. While we waited in line to get in, they measured our interpupillary distance (IPD). The IPD is the distance between the centers of your pupils, and HoloLens uses this value to ensure that holographic images are rendered correctly for your eyes.

We were all (about 50 of us in attendance) paired up and assigned a Microsoft guide, whose job was to answer questions and help us as needed. The Academy started with a brief introduction and some slides covering a little background information (what a Hologram is, the holographic platform, as well as a little bit about the HoloLens itself). Important to note here are the three main inputs used by HoloLens and available as part of the holographic platform: gaze, gesture, and voice.

After that brief introduction they brought out the HoloLens devices for all of us and we connected them to the computer. Once connected, we were instructed to open a web browser and access a web server running on the device. We used this website to enter our previously measured IPD, which configured the device for our use. Next, we learned about how to put on the device.


HoloLens is divided into two main parts. The outer portion is similar to a large pair of sunglasses that wrap a little way around the back of the head. The inner portion is a ring, similar to the band of a baseball cap (but quite a bit more rigid). This inner ring has a dial that allows it to expand and contract to fit your head. The outer portion can slide away from the inner ring and pivot. This lets you put the ring on your head and fit it rather easily, after which it is just a matter of sliding the glasses down and back in a way that is comfortable for you. I wear glasses and had no problems fitting the device to myself, and I only had to do it once.

Once fitted with the device, we were presented with an initial demo and basic instructions for how to interact with it. The demo consisted of a small RC truck. I was able to place the truck on a real-world surface (such as the floor or table in front of me). I was also able to make a gesture with my finger to place a small flag that the truck would immediately drive to. I could keep placing flags and it would continue moving between them.

My initial reaction was to smile. It was quite an interesting experience. The truck and flags were nice and bright and looked good in the real world. If the truck (a virtual object) encountered a real object in its path that it could not drive over, like the couch, it would turn away from it or flip over. It did, however, drive over my foot just fine. I was impressed with the accuracy and speed of the gesture recognition, and with how well the device was able to map, and continue to map, the real world. Even though I can't yet see a use for this, it was very eye-catching.

Hands-On Development

After the introductory demo I was ready to see more, and to actually get into some code. The plan for the rest of the event was to build out a simple demo. The focus of the demo was a very basic 3-D model of a large pad of paper with two sets of blocks on it. Leaning against the blocks, forming ramps of sorts, were two paper airplanes. Hanging in the air above each paper airplane was a ball (one was an actual ball and the other was a crumpled piece of paper). To speed up the tutorials, all models and code were provided for us; we just copied them into the correct location. We did, however, go over the code so we understood what was happening.

The first thing we did was just create a new scene in Unity and replace the default camera with a “holographic camera”, which was provided as part of the holographic platform. Once we created the scene and added our 3-D model, we downloaded it to the device and checked it out.

At this point there was no code of our own, so the moment the application started, the 3-D model would just be floating in front of me. I learned, with help from our guide, to turn toward the open area behind me before having the guide start the program. Once the program started, I unplugged the cable and moved around. The first thing I noticed is that I had to stand about 10 feet away for the image not to be clipped. This revealed to me that HoloLens has a somewhat limited field of view: if I got close to the model, it wouldn't all fit on my "screen." I also noticed that if I got too close to the model, I would see through it. I was told that this is something that can be adjusted within Unity. However, I was able to lean in to about a foot away from the crumpled paper, close enough to read the text on it.

What I understood to be happening is that the 3-D scene created in Unity was executing on HoloLens. The holographic camera I added actually seemed to represent the device itself. Therefore, when you are wearing HoloLens and viewing the scene, you are seeing it from the camera’s point of view. This allows you to walk around the scene and “interact” with it as if you were part of it. This concept became more noticeable, and more important to the experience, as the tutorials continued.
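To make that idea concrete, here is a tiny illustration of the pattern, assuming the holographic camera is tagged as the main camera. This is my own example, not code from the tutorial; the PlaceInFront name and the 2-meter distance are assumptions. Because the camera's transform is effectively the position and orientation of your head, a script can position holograms relative to it.

using UnityEngine;

// Assumed illustration: place this object 2 meters in front of the user's
// head at startup. Camera.main is the holographic camera, i.e., the HoloLens itself.
public class PlaceInFront : MonoBehaviour
{
    void Start()
    {
        Transform head = Camera.main.transform;
        transform.position = head.position + head.forward * 2.0f;
        // Rotate so the object's forward axis points back toward the user's head.
        transform.rotation = Quaternion.LookRotation(head.position - transform.position);
    }
}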

Gaze

Next, we were introduced to what is known as "gaze" in the holographic platform, which they explained allows you to determine what you are currently looking at. Our demo for this was to add an asset that represented a circular cursor. We added code so that the cursor would only appear if we were looking at a virtual object that was part of the model we initially created. The actual determination of whether an object was being looked at was handled by ray casting.
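To give a sense of how simple this is, here is a minimal sketch of how such a gaze cursor can be implemented in Unity. It is my reconstruction rather than the tutorial's actual script; the GazeCursor name, the cursor field, and the maxGazeDistance value are assumptions, and it assumes the holographic camera is the main camera.

using UnityEngine;

// Reconstruction (not the tutorial's script) of a basic gaze cursor:
// cast a ray from the camera (the user's head) and show the cursor only
// when the ray hits a collider in the scene.
public class GazeCursor : MonoBehaviour
{
    public GameObject cursor;           // the circular cursor asset
    public float maxGazeDistance = 10f; // assumed gaze range, in meters

    void Update()
    {
        Transform head = Camera.main.transform;
        RaycastHit hit;

        if (Physics.Raycast(head.position, head.forward, out hit, maxGazeDistance))
        {
            // Place the cursor on the surface being looked at, aligned to its normal.
            cursor.SetActive(true);
            cursor.transform.position = hit.point;
            cursor.transform.rotation = Quaternion.LookRotation(hit.normal);
        }
        else
        {
            cursor.SetActive(false);
        }
    }
}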

Additionally, in the interest of giving as much feedback as possible: I noticed some ghost images. I have seen this on other devices that Skylight supports, so it could be something related to my eyes or simply an aspect of viewing images this way. I would see the model doubled, slightly offset and transparent. However, I got used to it, and I had to really be looking for it to see it.

Once we put the HoloLens back on to test the updates, I immediately became aware that "gaze" really means "head tracking," since nothing currently tracks where your eyes are actually looking. You must move your head and aim it in the direction of an object. I was familiar with this concept, since this is how we interact with our Skylight product running on Epson Moverios. The head tracking was accurate and fluid, and the code that leveraged it was extremely simple.

Gesture

We went back into Unity, added some more scripts, and reviewed some more code in Visual Studio. This time we applied physics to the two balls that were floating above our model. The code we added would not apply the physics until we "gazed" at the ball and used a hand gesture to "select" it. The select gesture is built-in, and consists of holding your hand out, raising your index finger, and moving it down, like clicking a mouse button in the air. This gesture is part of the holographic platform. We were led to believe that it is one of several available gestures and that the platform is actually capable of providing low-level information (such as the position of the hand and fingers) that can be used to create custom gestures.
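As a rough illustration of the physics side of this, here is a hedged sketch of what the "drop the ball on select" behaviour can look like in Unity. The gesture plumbing itself is assumed and not shown: the platform raises a select/tap event for the gazed-at object, represented here by a hypothetical OnSelect() call; the DropOnSelect name is also mine.

using UnityEngine;

// Hypothetical sketch: the ball floats (kinematic) until it is selected,
// at which point it is handed over to the physics engine and falls.
public class DropOnSelect : MonoBehaviour
{
    Rigidbody rb;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
        rb.isKinematic = true;   // keep the ball hanging in the air
        rb.useGravity = false;
    }

    // Assumed to be invoked when the user gazes at this object and performs the select gesture.
    public void OnSelect()
    {
        rb.isKinematic = false;  // let physics take over
        rb.useGravity = true;    // gravity drops the ball onto the paper-airplane ramp
    }
}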

After trying this out a little, I noticed that while the field of view might seem cropped or small, I was able to extend my hands out and up, near the limits of my peripheral vision, and the gesture would still be detected and recognized. I never got a false negative or positive, and I made many attempts. Basically, when I looked at one of the balls and made the gesture, it would drop, roll down the paper airplane, roll across the table, and drop into space. I say "into space" because our scene did not yet recognize anything in the real world.

Voice

In order to experiment with the third and final supported input mechanism for the platform, we added voice control that would reset the scene once we said a key phrase. The biggest takeaway here is that no training was required: the call we used was simply KeywordRecognizer.Instance.AddKeyword(“Reset”), after which we handled the event it fired. We could change the keyword to anything we wanted.
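For comparison, here is a small sketch of the same idea written against the keyword-recognition API that later shipped publicly in Unity (UnityEngine.Windows.Speech). It is not the pre-release call we used at the Academy, so treat it as an approximation of the behaviour rather than the tutorial's code; the VoiceReset name and ResetScene() hook are mine.

using UnityEngine;
using UnityEngine.Windows.Speech;

// Approximate sketch of keyword-based voice input using Unity's shipping speech API.
public class VoiceReset : MonoBehaviour
{
    KeywordRecognizer recognizer;

    void Start()
    {
        recognizer = new KeywordRecognizer(new[] { "Reset" });
        recognizer.OnPhraseRecognized += args =>
        {
            if (args.text == "Reset")
            {
                ResetScene();   // put the balls back above the paper airplanes
            }
        };
        recognizer.Start();
    }

    void ResetScene()
    {
        // Scene-specific reset logic would go here.
    }
}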

When I tested it, HoloLens had no issues hearing me say the keyword. For the record, I have no idea where the microphone is, and they wouldn't tell me. I also noticed that some people in the room had thick accents, and they did not seem to struggle to get the keyword recognized, though they did have to be careful about what the keyword was.


Spatial Sound

At this point I think they were just trying to show off. Honestly though, the spatial sound concept worked really well and was one of the more impressive features. All we did was add ambient music to the notepad object, and impact and rolling sounds to the spheres.

Now, as I ran the updated project, I could hear the ambient music. The powerful part is that as I moved around, the sound behaved as if it was actually emanating from the virtual notepad object. It got louder as I got closer and moved from ear to ear as I walked around the scene. When I selected a sphere to drop it onto the scene, I would now hear a small impact sound as it collided with the other objects and a rolling sound as it moved.
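As a rough idea of the wiring involved, here is a minimal sketch of attaching a looping, fully spatialized ambient sound to an object such as the notepad. It uses the standard Unity AudioSource properties, which is an assumption about what the tutorial did under the hood; the AmbientSound name and ambientClip field are mine, and I am not showing any HoloLens-specific spatializer configuration.

using UnityEngine;

// Assumed sketch: attach a 3-D (spatialized) looping sound to this object.
public class AmbientSound : MonoBehaviour
{
    public AudioClip ambientClip;

    void Start()
    {
        AudioSource source = gameObject.AddComponent<AudioSource>();
        source.clip = ambientClip;
        source.loop = true;
        source.spatialBlend = 1.0f; // 1.0 = fully 3-D: volume and panning follow the listener's position
        source.rolloffMode = AudioRolloffMode.Logarithmic;
        source.Play();
    }
}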

This concept could be used to add a little more realism to a virtual environment. Furthermore, the use of sound could help overcome the reduced field of view. For example, an object containing information about some work-related alert could emit a noise. The noise would let the user know that some important information is available and they could simply turn their gaze toward the sound to see this information.

Spatial Mapping

Testing this part of the holographic platform required no coding. I just dropped the “SpatialMapping” object from the toolkit into my Unity scene. Since the mapping is hidden by default, we were instructed to set it to render its mesh so we could see it. It took less than a minute to make this update, but the impact was amazing.

Now as I looked around, I could see a mesh overlaying everything I was looking at. It almost felt like a blanket had been laid over each object (such as the couch) and then pulled tight against it. After a second or two of glancing in a direction, the mesh would form tightly against everything the HoloLens could see. This caused the real-world objects (for me it was my shoe, the table, a couch, and a swivel chair) to become part of the 3-D scene.

The feeling of this was amazing. I did have a few artifacts, mesh attached to nothing and floating in front of me, but I believe this was an issue with Unity's engine, as some guys from Unity used my device to experience my issue and then had a little huddle where they whispered and looked about nervously. I was not turned off by this because I understand it is still early, and what I was seeing was far more evolved than I had imagined.

Finale

To wrap everything up, we quickly made a change to the project to let us place the scene (the notepad and other objects) onto a real-world surface. Now when I ran the application, I could select the notepad and then select the table to stick the notepad to the table. I even stuck it to the top of someone's head at one point.
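Here is a hedged sketch of what that placement step can look like: while in "placing" mode the object follows the user's gaze and snaps to whatever surface the gaze ray hits (such as the spatial-mapping mesh of the table or floor). The TapToPlace name, the OnSelect() hook, and the 10-meter range are assumptions, not the tutorial's code.

using UnityEngine;

// Assumed sketch of gaze-driven placement onto a surface.
public class TapToPlace : MonoBehaviour
{
    bool placing;

    // Assumed to be invoked by the select gesture.
    public void OnSelect()
    {
        placing = !placing;   // first select picks the object up, second select drops it
    }

    void Update()
    {
        if (!placing) return;

        Transform head = Camera.main.transform;
        RaycastHit hit;
        if (Physics.Raycast(head.position, head.forward, out hit, 10f))
        {
            transform.position = hit.point;   // stick to whatever surface the gaze ray hits
        }
    }
}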

After that, we did one last thing for fun, and I have to say it was pretty interesting even in its simplicity. We were instructed to drag a new object onto our scene and add a little more code. They were being quiet about what we were doing so we would be surprised. The object we added was called “Underworld,” and the code hinted that the notepad object was going to be swapped out for this new object.

Once the program was running, I placed the notepad on the floor and selected the balls to knock them down, as I had been doing. This time there was a little explosion when the balls hit the notepad; the notepad and all of its objects were destroyed and replaced by what appeared to be a jagged hole in the floor. I could see a hint of bright green inside the hole, and as I got closer I could see what looked like mountains and even a bird occasionally flying by. The best way to describe it is that the object we added was a large square filled with natural objects (including clouds, a river, and mountains). As I moved around the hole, I could see more and more of this “underworld.” I even got down on the floor to view it at an angle and could see the clouds above the mountains. To be honest, I stepped over the hole because I didn't want to step “in” it.

Mind Blown!

Final Thoughts

Most of the time, the device was connected via USB. I only unplugged it once the program was downloaded and I wanted to walk around. HoloLens never seemed to get hot (or even warm), and I didn't see or hear about any battery issues. The device is definitely large enough to allow for a significantly sized battery. I asked about the battery, but our guide was not allowed to answer questions related to hardware specifications.

I did not find wearing the device or taking it off and putting it back on uncomfortable or problematic. Nor did I find it too heavy or feel like it was going to slip off. Even when I got down on the ground, it did not seem awkward to me.

Overall, I greatly enjoyed the experience. I fully understand that it was completely controlled by Microsoft, and even somewhat contrived, but that doesn't really affect my enjoyment. I am truly excited about this product and what it will inspire.