#Event – Resources used with @ivanatilca during the “Lessons Learned creating a multiplatform AI project for Azure Kinect and Hololens 2” session at Global XR Talks


Hi !

We had an amazing time last week with Ivana at the Global XR Talks, sharing some of our lessons learned creating a Mixed Reality app that works with Hololens 2 and Azure Kinect.

As usual, now it’s time for slides and code

Slides

Code

The main scripts for Camera and Cognitive Services interaction are available here https://github.com/elbruno/events/tree/main/20200806%20Global%20XR%20HL2%20to%20Azure%20Kinect%20Lessons%20Learned

Session Recording

Resources

#Unity3D – Making a CustomVision.ai HTTP Post call to have a better #MRTK experience with #CognitiveServices @ivanatilca


Hi !

Quick post today, with mostly sample code. It’s all about a scenario that Ivana and I faced a couple of days ago while using MRTK and trying to use some Cognitive Services.

As of today, not all Cognitive Services have official Unity3D support. In the end, that’s not a problem: we can just make an HTTP POST call and that’s it. However, this is not as easy as it is supposed to be.

So, after facing some issues with the System.Net.HttpClient library, I decided to use UnityWebRequest. This class’s main purpose is to work with HTTP forms; however, we can manage to send a raw image with a sample like this:

string DetectImage(byte[] image, string imageUrl)
{
    string body = string.Empty;
    using (var request = UnityWebRequest.Post(imageUrl, ""))
    {
        // Send the raw image bytes instead of a form
        request.SetRequestHeader("Content-Type", "application/octet-stream");
        request.uploadHandler = new UploadHandlerRaw(image);
        request.SendWebRequest();

        // Ugly busy-wait until the request completes. (Note: "new WaitForSeconds(1)"
        // only works inside a coroutine, so it did nothing here; Thread.Sleep at
        // least throttles the loop.)
        while (request.isDone == false)
        {
            System.Threading.Thread.Sleep(100);
        }

        if (request.isNetworkError || request.isHttpError)
        {
            Debug.Log(request.error);
        }
        else
        {
            body = request.downloadHandler.text;
        }
    }
    return body;
}

As we can see in the previous sample, there is no async / await support here, so I added a couple of ugly lines of code to wait for completion. This could be improved with a proper timeout or delay; for this sample, it works great.
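For completeness, the more idiomatic Unity alternative to the busy-wait above is a coroutine, where yield return waits for the request without blocking the main thread. This is only a sketch under the same assumptions as the sample above; the class name and the callback parameter are mine, not from the original project.

```csharp
using System;
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class ImageDetector : MonoBehaviour
{
    // Coroutine version of DetectImage: the result is delivered via a callback
    // because coroutines cannot return values directly.
    public IEnumerator DetectImageCoroutine(byte[] image, string imageUrl, Action<string> onResult)
    {
        using (var request = UnityWebRequest.Post(imageUrl, ""))
        {
            request.SetRequestHeader("Content-Type", "application/octet-stream");
            request.uploadHandler = new UploadHandlerRaw(image);

            // yield hands control back to Unity until the request finishes
            yield return request.SendWebRequest();

            if (request.isNetworkError || request.isHttpError)
                Debug.Log(request.error);
            else
                onResult(request.downloadHandler.text);
        }
    }
}
```

It can be started from any MonoBehaviour with StartCoroutine(DetectImageCoroutine(bytes, url, body => Debug.Log(body))).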

As a bonus, you can watch the full presentation in the Global XR YouTube channel here

Happy coding!

Greetings

El Bruno

References

#Hololens2 – Project sizes using #MRTK


Hi !

Quick post today. When I started with Hololens 1, Unity, and the Hololens Toolkit a couple of years ago, I did this exercise to get a sense of the disk space my projects and tests required.

I realized that I never blogged about it, so here is an updated version using the latest versions. These are the tools and versions I’m using:

  • MRTK 2.4
  • Unity 2019.4.6f1
  • Visual Studio 2019 Preview (16.7.0)

Using these tools, I followed the steps defined in “Initializing your project and deploying your first application” (see references). Let’s take a look at the disk usage.

  • Create an empty 3D project with Unity.
    Disk Size: 49 MB
  • Build the project in Unity.
    Disk Size: 200 MB
  • Import the MRTK 2.4.0 Foundation package in Unity.
    Disk Size: 332 MB
  • Build the project in Unity.
    Disk Size: 668 MB
  • Build the output in Visual Studio 2019 for the HL2.
    Disk Size: 3 GB
  • Apply MRTK Configuration and Settings for a HL2 in Unity and Build.
    Disk Size: 3.10 GB
  • Build the output in Visual Studio 2019 for the HL2.
    Disk Size: 3.68 GB

So, as you can see, a ready-to-go final project will require around 3.5 GB of disk space. Get your credit card ready for a new SSD!
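If you want to track these numbers on your own machine, here is a quick way to measure the project folder, assuming a Unix-like shell (on Windows, the folder Properties dialog in File Explorer reports the same data):

```shell
# Print the total disk usage of the current project folder
du -sh .
```

Running `du -sh ./*` instead breaks the total down per top-level subfolder, which makes it easy to see how much of the size comes from the Unity Library folder versus the Visual Studio build output.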

Happy coding!

Greetings

El Bruno

Resources

#WindowsML – Create Native AI apps for #Hololens #Windows10

Hi!

A couple of weeks ago I wrote about a new feature in Windows 10 that allows ML models to be used natively in W10 apps. For me, that means a breakthrough for the entire Windows 10 device ecosystem. For example, as we already know, the new version of Hololens incorporates a chip specially dedicated to AI tasks (DNNs specifically), so I assumed that in Hololens V2 we could start using Windows ML with the device’s own capabilities.

What I did not expect is that the Preview version of Windows 10 RS4 for Hololens already gives us access to that feature. I have not had time to update my Hololens to RS4; however, some community experts have shared very interesting experiences.

For example, let’s look at this video by Rene Schulte, where he shows an app that performs real-time image classification using the SqueezeNet ONNX model.

Mike Taulty also offers a series of posts with an excellent research job on Windows ML and Hololens. He started by showing how to create a model using Azure Custom Vision, export it to CoreML, and then convert it to an ONNX model.

Important: currently, ONNX is the only model format supported by Windows ML. It is possible to export models from CoreML, TensorFlow, CNTK, and other frameworks to ONNX.

From there, Mike shows how to work with the exported model from UWP and C#. In subsequent posts, he also explains how, thanks to the Hololens RS4 Preview, these capabilities can be used in applications for Hololens (see references).
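As a rough sketch of what that UWP C# flow looks like, the code below uses the later, stable Windows.AI.MachineLearning namespace (the RS4 preview bits used Windows.AI.MachineLearning.Preview with slightly different type names). The tensor names "data" and "classLabel" depend entirely on the exported model, so treat them as assumptions:

```csharp
using Windows.AI.MachineLearning;
using Windows.Storage;

public static class WinMlSketch
{
    // Load an ONNX model, bind an input tensor, evaluate, and read an output.
    public static async System.Threading.Tasks.Task<string> EvaluateAsync(
        StorageFile onnxFile, TensorFloat inputImage)
    {
        // Load the ONNX model and create an evaluation session on the default device
        LearningModel model = await LearningModel.LoadFromStorageFileAsync(onnxFile);
        using (var session = new LearningModelSession(model))
        {
            // Bind the input tensor to the model's input feature (name is model-specific)
            var binding = new LearningModelBinding(session);
            binding.Bind("data", inputImage);

            // Run inference and read the output feature (name is model-specific)
            LearningModelEvaluationResult result = await session.EvaluateAsync(binding, "run");
            return result.Outputs["classLabel"].ToString();
        }
    }
}
```

The nice part of this API is that the same code runs on CPU, GPU, or a dedicated AI chip; the device choice is a parameter of LearningModelSession.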

So, if you dare to upgrade your Hololens to the RS4 Preview, it is a very interesting moment to start thinking about scenarios that combine WinML and Hololens!

Happy Coding!

Greetings @ Toronto

El Bruno

References

#WinML – #GamingML created for #Windows10 gamers (#Unity3D and #Hololens V2 included!)

Hi!

First of all, I’ll start by saying that I finally found a hashtag for Windows Machine Learning posts: [#WinML]. Here is a small summary of WindowsML (actually, of the part that interests me).

The WinML API allows us (Windows 10 developers) to use trained Machine Learning models and make inferences with them on a wide variety of hardware (CPU, GPU, VPU). An ML programmer may choose a framework, such as CNTK, Caffe2, or TensorFlow, and use it to build and train an ML model.

That model would then be converted to the Open Neural Network Exchange (ONNX) format, co-developed by Microsoft, Facebook, and Amazon. From that moment on, a Windows 10 application can use this ML model internally as part of the app.

This in itself is great; however, there is a new layer created specifically for games: DirectML. DirectML is built on top of Direct3D and represents a special layer for gaming scenarios that provides GPU hardware acceleration for WindowsML operations.

In the references, I left the article that covers all the technical details of DirectML.

(Image: Windows ML architecture diagram)

Another interesting point of the news is the announcement of support for Unity ML-Agents. Native WindowsML / DirectML integration will be available for Windows 10 games created with Unity 3D.

Personally, I did not know about Unity Machine Learning Agents. After reading a bit about them, it is clear to me that in the near future, the games and apps created with Unity3D will be completely different from what we know today.

Bonus

 

It’s time to start connecting the dots and entering the area of speculation. A while ago, it became public that the new version of Hololens will have a new chip specifically dedicated to Deep Neural Network operations (see references). From a generic point of view, Hololens is just another type of Windows 10 device.

If we assume that Hololens V2 uses a version of Windows 10 that supports the Windows ML capabilities out of the box, we already have a Mixed Reality headset with impressive capabilities, where apps can use ML natively (the chip is already there!).

From the point of view of a C# developer, this is great!

Happy Coding!

Greetings @ Burlington

El Bruno

References

 


#Event – Mixed Reality Workshop


Hi !

Next March 17, I will be part of the team at the Mixed Reality Workshop. It is a full day of sessions to help understand how to create Mixed Reality apps with Windows 10, Unity 2017.2, and Visual Studio 2017. The agenda is as follows:

  • Intro to Windows Mixed Reality
  • Building a mixed reality experience
  • Placing a virtual object in a 3D space
  • Interacting with virtual objects
  • Handling user movement in a 3D space
  • Deploying and building a custom 3D launcher
  • Q&A/Wrap-up/Where to go from here

Happy Coding!

Greetings @ Toronto

El Bruno

References


#Hololens – How to use the TapToPlace and InitialScanManager classes

Hi !

Yesterday I wrote about a custom solution I made to add a Tap to Place feature to a Hololens app. The main features were:

  • We can perform an initial scan of the environment, limited by a time constraint. We can also continue the scan process if we don’t find a minimum number of planes for the floor or walls
  • We can associate one class to a hologram to enable the Tap to Place feature
  • This process also allows us to validate whether we can place the hologram on a vertical (wall) or horizontal (floor) plane
  • The validation process also has some visual features, like changing the color of a box collider from red to green depending on whether we can place the object, and casting a shadow at the final location of the hologram

“InitialScanManager.cs” is the class I use to perform the initial environment scan. I usually add it to an empty GameObject. The main properties are these:

  • Limit Scanning by Time and Scan Time define how the scan process is performed. If we set a Scan Time value of N and check Limit Scanning by Time, the app will scan the environment for N seconds and then stop the scan process
  • Minimum Floor and Minimum Wall define the minimum number of planes to find on the floor and the walls
  • Player, optional, is the main hologram. When the scan process starts it is hidden, and when the scan process ends the player is shown again
  • DebugTextPlayer, optional, a TextMesh used for debug messages
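The complete source is on GitHub, but the scan loop described above can be sketched roughly like this. This is not the post’s actual code: it assumes the HoloToolkit spatial-mapping helpers (SpatialMappingManager, SurfaceMeshesToPlanes, PlaneTypes), and the property names simply mirror the Inspector fields listed above.

```csharp
using System.Collections;
using HoloToolkit.Unity.SpatialMapping;
using UnityEngine;

public class ScanSketch : MonoBehaviour
{
    // Mirrors the Inspector properties described in the post
    public bool LimitScanningByTime = true;
    public float ScanTime = 30f;
    public int MinimumFloor = 1;
    public int MinimumWall = 1;

    private IEnumerator ScanRoutine()
    {
        // Start observing the room's spatial mesh
        SpatialMappingManager.Instance.StartObserver();
        if (LimitScanningByTime)
            yield return new WaitForSeconds(ScanTime);

        // Keep scanning until enough floor and wall planes have been found
        while (SurfaceMeshesToPlanes.Instance.GetActivePlanes(PlaneTypes.Floor).Count < MinimumFloor ||
               SurfaceMeshesToPlanes.Instance.GetActivePlanes(PlaneTypes.Wall).Count < MinimumWall)
        {
            SurfaceMeshesToPlanes.Instance.MakePlanes();
            yield return new WaitForSeconds(1f);
        }

        SpatialMappingManager.Instance.StopObserver();
    }
}
```

In the real class this is also where the Player hologram is hidden and shown again, and where progress is written to the DebugTextPlayer TextMesh.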

(Screenshot: InitialScanManager properties in the Unity Inspector)

The class “TapToPlacePlayer.cs” is the one we must associate to the element where we want this feature. The main properties are:

  • Placeable Bounds and Non Placeable Bounds Materials are the materials used when we enter “place mode”. I create a collider with these materials, and the class switches between them based on whether we can place the hologram at the selected position
  • Placeable Shadow and Non Placeable Shadow Materials are the materials used to display the final position where the hologram will be placed
  • Display Shadow defines whether we want to show the final position of the hologram
  • Placement Surface defines the positioning mode, Horizontal or Vertical; that is, floor or wall
  • DebugTextPlayer, optional, a TextMesh used for debug messages

(Screenshot: TapToPlacePlayer properties in the Unity Inspector)

The complete source code can be downloaded from GitHub (link).

Greetings @ Toronto

El Bruno

References

El Bruno, my posts