MobiledgeX Unity SDK 2.1.2 with Magic Leap One

September 17th, 2020


Garner Lee

Principal Engineer

Augmented Reality (AR) is changing how we are entertained, how we learn, how we receive healthcare, and how industry advances. AR adds information to your view of the real world in real time, while Virtual Reality replaces your observed environment with a simulated one.

The developers at MobiledgeX obtained a few Magic Leap One devices and placed them on the Edge. Why put AR headsets on the Edge? Because wearable AR headsets make it possible to combine a cell phone’s GPS location awareness with AR spatial mesh technology. Using an AR headset together with a cell phone acting as a WiFi hotspot to run MobiledgeX’s FindCloudlet, we connected to the closest edge-enabled backend server. The cell phone also runs a small HTTP server that, among other things, supplies fine-grained GPS location information to the Magic Leap device. Our Magic Leap proof-of-concept Unity application connected to our Face Detection servers with excellent results that we’d like to share.
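As an illustration of that pairing, here is a minimal sketch of how the headset application could poll the phone’s companion HTTP server for a GPS fix from a Unity coroutine. The hotspot address, endpoint path, and JSON field names below are illustrative assumptions, not the actual MobiledgeX protocol:

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class PhoneGpsClient : MonoBehaviour
{
    // Hypothetical endpoint served by the paired phone application.
    const string GpsUrl = "http://192.168.43.1:8080/gps";

    [System.Serializable]
    class GpsFix { public double latitude; public double longitude; }

    // Coroutine that fetches one GPS fix and hands it to the caller.
    public IEnumerator PollGps(System.Action<GpsFix> onFix)
    {
        using (UnityWebRequest req = UnityWebRequest.Get(GpsUrl))
        {
            yield return req.SendWebRequest();
            if (!req.isNetworkError && !req.isHttpError)
            {
                onFix(JsonUtility.FromJson<GpsFix>(req.downloadHandler.text));
            }
        }
    }
}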

The MLCamera uses the same physical camera as the AR Capture mode, so the two cannot capture at the same time; this demo was therefore recorded through one of the stereo headset displays. Here is our result, pointed at “faces” from an image search:

Placing a “Decorator” in AR space

Our face detection servers return 2D detection rectangles for the faces they recognize. The AR headset’s position and rotation can be used to calculate the relative rotation and position of those detection rectangles, placing them in 3D space.

The Magic Leap One has an outward-facing camera (MLCamera) that can record the real-world scene in front of the user, and an AR UnityEngine Camera GameObject prefab asset that provides the data used to locate the user’s headset position.

Currently, the face recognition application places the bounding boxes at a 1-meter (1 Unity unit) distance in front of the AR headset. For things to look right, we resize and move each detection “decorator” Unity prefab into a more appropriate location:

// Set the decorator's position to the headset camera's position.
decorator.transform.position = parentCam.transform.position;
// Move 1 unit (1 meter) along the camera's forward vector; the real distance to the face is unknown.
decorator.transform.Translate(unitCamForward);
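The snippet assumes unitCamForward already holds the headset camera’s forward direction; it is not defined in the listing, but presumably it is the camera’s normalized forward vector:

// Assumed definition (not shown in the original listing):
Vector3 unitCamForward = parentCam.transform.forward; // unit-length world-space forward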

Now, let’s position the detection decorator. The face detection server operates on a 2D screen coordinate system where (0,0) origin is the upper left of the screen, instead of Unity’s screen independent spatial coordinates (0,0,0) origin in 3D space:

// Rotate to match camera rotation:
Quaternion parentRot = parentCam.transform.rotation;
Vector3 parentEuler = parentRot.eulerAngles;
// Keep the camera's pitch and yaw, but zero the roll (z-axis) so the decorator stays upright.
decorator.transform.rotation = Quaternion.Euler(parentEuler.x, parentEuler.y, 0);

// Map the detection rectangle r (r.x, r.y: origin at top left of the image)
// into Unity's coordinate system (origin at the "center" of the view).
// The xOffset/yOffset correction depends entirely on the webcam's physical position.
Vector3 v3pos = new Vector3(
  zscale * (float)(-.5 * widthAtDepth + ((r.x + r.width/2 + xOffset) / imageW * widthAtDepth)),
  zscale * (float)(.5 * heightAtDepth - ((r.y + r.height/2 + yOffset) / imageH * heightAtDepth)),
  0); // left-handed coordinate system
decorator.transform.Translate(v3pos); // Translate() is relative to the decorator's own space
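The listing references widthAtDepth and heightAtDepth without defining them. Assuming they describe the size of the camera’s view frustum at the 1-meter decorator distance, and that parentCam is a UnityEngine.Camera, a standard derivation from the vertical field of view would look like this:

// Sketch, assuming widthAtDepth/heightAtDepth are the frustum dimensions
// at the decorator's depth. fieldOfView is the vertical FOV in degrees.
float depth = 1f; // decorators sit 1 unit in front of the headset
float heightAtDepth = 2f * depth * Mathf.Tan(parentCam.fieldOfView * 0.5f * Mathf.Deg2Rad);
float widthAtDepth = heightAtDepth * parentCam.aspect;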

We could potentially perform a RayCast to find the Z distance of each detection rectangle. Knowing the Z distance would allow placing the detection rectangles at their proper location on the spatial mesh, instead of at a fixed distance in front of the headset. Some additional calculations against the actual view frustum are still required to align the rectangles more precisely with real-world objects and varying camera angles. During our tests we noticed that the spatial mesh did not update especially quickly and that it missed smaller objects in the area, such as stationary flat monitors, so the computed Z distance could land behind the monitor. We plan to explore this further in the future.
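As a rough sketch of that RayCast idea, one could cast a ray from the headset through each placed decorator against the spatial-mesh colliders; the “SpatialMesh” layer name below is an assumption about the project setup, not the actual configuration:

// Sketch: raycast from the headset toward the decorator to find the
// real depth of the detection on the spatial mesh.
Vector3 origin = parentCam.transform.position;
Vector3 dir = (decorator.transform.position - origin).normalized;
int meshMask = LayerMask.GetMask("SpatialMesh"); // assumed layer for mesh colliders
RaycastHit hit;
if (Physics.Raycast(origin, dir, out hit, 10f, meshMask))
{
    // Snap the decorator onto the spatial mesh at the hit point.
    decorator.transform.position = hit.point;
}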

The Face Detection Edge Server

You might be curious to know where our Face Detection Edge Servers reside: within the MobiledgeX distributed infrastructure. Using the cell phone during this test gave us the ability to acquire most of the data needed by the Face Detection Edge Server. That data was gathered by querying the phone application paired with our Magic Leap Unity Face Detection application:

Some of the edge cloudlet locations also have GPU resources to accelerate computer vision tasks.

If you would like to learn more about Face Detection, Face Recognition, Object Recognition, and the server backends, you can access the various guides here: Face Detection Sample App Guides.

Happy coding!