Developer Spotlight: Offloading Computer Vision Workloads to the Telco Edge for Unity Applications

September 24th, 2020


Jeff James

Director of Content Marketing

We’re starting a new blog series that will shine a spotlight on some of the projects our internal software engineers are working on, as well as some that our external development partners have in the works.

In this first installment of the MobiledgeX Developer Spotlight, we’re chatting with Ahmed Hassan, a Software Engineer here at MobiledgeX. Ahmed has been working on an open-source computer vision solution for Unity that connects to an OpenCV server deployed on the MobiledgeX Edge Computing platform.

Ahmed explained that part of the reason he decided to create this solution was that “Unity is popular for its amazing visualizations and its cross platform approach...however, Unity is single-threaded...which means all the work has to happen on the main thread. This can affect other processes running in the application...so delegating complex procedures to the Edge server allows Unity developers to refactor their game/app design to deal with Unity as a visualization tool and dedicate the mobile phone CPU for lighter processing tasks.”

Read on to find out the what, why, and how of Ahmed’s project, then check out his technical how-to article and video on how you can add MobiledgeX Computer Vision to your Unity Project.

Q. What are you trying to accomplish with your project?

A: I’m creating a Unity component that connects to the computer vision server deployed on MobiledgeX. The server offers capabilities such as object detection and face detection, and it harnesses the minimal latency of Edge Computing. The application can communicate with the Edge server through either REST requests or WebSocket messages.

Using the MobiledgeX Unity SDK, the computer vision component connects to the closest edge cloudlet where the application instance is deployed and runs computer vision AI on either an incoming video feed or the device’s camera feed.
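
To make that flow concrete, here is a minimal sketch of offloading a single camera frame from Unity over REST. The host would normally come from the SDK’s find-cloudlet result, and the endpoint path, port, and form field shown here are placeholders for illustration, not the server’s confirmed API:

    // Minimal illustrative sketch: the edge URL normally comes from the
    // MobiledgeX SDK's find-cloudlet result; the endpoint path, port, and
    // form field below are placeholders, not the server's confirmed API.
    using System.Collections;
    using UnityEngine;
    using UnityEngine.Networking;

    public class EdgeVisionClient : MonoBehaviour
    {
        public string edgeUrl = "http://<edge-host>:8008/detector/detect/"; // placeholder

        // Encode the current frame as JPEG and offload detection to the Edge.
        public IEnumerator DetectFaces(Texture2D frame)
        {
            byte[] jpeg = frame.EncodeToJPG(75); // compress before sending

            WWWForm form = new WWWForm();
            form.AddBinaryData("image", jpeg, "frame.jpg", "image/jpeg");

            using (UnityWebRequest request = UnityWebRequest.Post(edgeUrl, form))
            {
                yield return request.SendWebRequest();

                if (!request.isNetworkError && !request.isHttpError)
                {
                    // The server replies with detection coordinates; Unity's
                    // only job is to draw them on top of the camera feed.
                    Debug.Log(request.downloadHandler.text);
                }
            }
        }
    }

The phone does nothing heavier than JPEG encoding; all of the detection work happens server-side.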

While there are other Unity applications with computer vision capabilities, most of them use on-device processing, which hurts application performance (frames per second) and drains the mobile phone’s battery.

The goal is mainly to provide a computer vision service with a good user experience by running all the heavy computer vision AI processing on the Edge (especially with the GPU-enabled flavor on the MobiledgeX Console) and using Unity solely as a presentation layer.

Q. What unique features does MobiledgeX Edge-Cloud R2 provide that makes your project possible?

A: I used two components of MobiledgeX Edge-Cloud R2: the MobiledgeX Console and the Unity SDK.

For the MobiledgeX Console:

It is super easy to create an organization, select a region, and upload a ComputerVision Docker image to an edge cloudlet.

Also, through the MobiledgeX Console, I was able to monitor my backend in real time and make use of the extensive logs to debug any problems I had with the ComputerVision server.

The Unity SDK has two interesting features:

  1. I can connect to the Edge server (application instance) with just 5 lines of code (see the sketch after this list).

  2. The SDK comes with a WebSocket implementation that makes communication with my Edge server easy.
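
As a rough illustration of the first point, the connection flow from the SDK samples looks approximately like this; treat the exact class and method names as version-dependent rather than definitive:

    // Roughly the "five lines" from the MobiledgeX Unity SDK samples;
    // class and method names may differ slightly between SDK versions.
    using MobiledgeX;
    using DistributedMatchEngine;
    using System.Threading.Tasks;

    public class EdgeConnection
    {
        public async Task<string> GetEdgeWebSocketUrl()
        {
            MobiledgeXIntegration mxi = new MobiledgeXIntegration();
            await mxi.RegisterAndFindCloudlet();  // register the client and find the nearest cloudlet
            mxi.GetAppPort(LProto.L_PROTO_TCP);   // select the application's TCP port mapping
            return mxi.GetUrl("ws");              // WebSocket URL of the nearest app instance
        }
    }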

Q. Why would someone want to use the approach you did? What benefit would it give them?

A: Unity is popular for its amazing visualizations and its cross-platform approach (build once, publish everywhere). However, Unity is single-threaded (at least classic MonoBehaviour Unity), which means all the work has to happen on the main thread. Consequently, any computer vision processing happening on the client device will affect other processes running in the application (animation, particle systems, etc.).

Delegating complex procedures to the Edge server allows Unity developers to refactor their game/app design to treat Unity as a visualization tool and dedicate the mobile phone’s CPU and GPU to lighter workloads.

This approach also removes much of the worry about network latency and jitter, since you know the application’s user will be connected to an Edge server deployed in their city.

Q. Who would be the specific audience for your how-to article? 

Mainly Unity developers, but more generally, developers who are interested in building complex apps for mobile devices and run up against the limits of the phone’s CPU with on-device processing (battery drain, or a poor user experience on older phones).

Q: What inspired/motivated you to create this example?

Computer vision is exciting to any app developer. The fact that there is a plug-and-play server I can deploy on MobiledgeX and see the computer vision magic happen at the Edge was an intriguing opportunity for me.

I didn’t have to do a lot of research, since [MobiledgeX engineer] Bruce Armstrong created an easy-to-follow tutorial on how to use the ComputerVision server with REST.

Also, the computer vision server code on mobiledgex/edge-cloud-sampleapps is super easy to understand and comes with different client implementations (TCP client, WebSocket client) written in Python, which were easy to convert to C# for Unity.
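
To give a flavor of that conversion, here is a minimal C# sketch of a WebSocket exchange using .NET’s ClientWebSocket. The message format shown (one binary JPEG frame in, one JSON text reply out) is an assumption for illustration, not the sample server’s documented protocol:

    // Minimal WebSocket round trip with .NET's ClientWebSocket. The
    // one-binary-frame-in / one-text-reply-out format is assumed here.
    using System;
    using System.Net.WebSockets;
    using System.Text;
    using System.Threading;
    using System.Threading.Tasks;

    public static class EdgeWsClient
    {
        public static async Task<string> SendFrameAsync(Uri serverUri, byte[] jpegFrame)
        {
            using (var ws = new ClientWebSocket())
            {
                await ws.ConnectAsync(serverUri, CancellationToken.None);

                // Send one camera frame as a single binary message.
                await ws.SendAsync(new ArraySegment<byte>(jpegFrame),
                                   WebSocketMessageType.Binary, true, CancellationToken.None);

                // Read the detection result back as UTF-8 text.
                var buffer = new byte[8192];
                WebSocketReceiveResult result =
                    await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
                return Encoding.UTF8.GetString(buffer, 0, result.Count);
            }
        }
    }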

____________________________________________________________________________

Thanks for sharing, Ahmed! If you are interested in using his solution, make sure to check out Ahmed’s technical how-to article. And if you are interested in applying for early access to try out the MobiledgeX platform, you can apply on the MobiledgeX Developer Portal.