Continuing the Windows Machine Learning series, today I'll take a look at one of the sample apps found in the Windows Universal Samples repository on GitHub.

Let's start with one of the most interesting ones: the [SqueezeNet Object Detection Sample].
This sample uses a model called SqueezeNet that identifies the predominant object in an image. The SqueezeNet model is trained to recognize more than 1000 object categories, and it honestly works quite well. For example, let's look at the values it returns for my keyboard and a cup.
Fairly good. However, when given a poor-quality picture of Venom, the result is not as good.
Well, time to analyze the code a bit. The first thing to keep in mind is that we need Windows 10 build 17110 or higher, which currently means being part of the Windows Insider program.
I'm working with build 17120.1.
To use the SqueezeNet model, the UWP app relies on two files:

- Labels.json: a list of all the labels (objects) the model can recognize
- SqueezeNet.onnx: the ML model itself
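To give an idea of what Labels.json looks like, it maps class indices to human-readable labels. The two entries below are illustrative ImageNet-style labels, not a verbatim copy of the sample's file:

```json
{
  "0": "tench, Tinca tinca",
  "1": "goldfish, Carassius auratus"
}
```

At evaluation time, the app looks up the indices with the highest scores in this map to print the recognized objects.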
The UWP app uses the new [Windows.AI.MachineLearning.Preview] API to load the ONNX model, then binds an input image and processes the evaluation output to determine the labels found.
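As a rough sketch of that load/bind/evaluate flow, the C# below uses the preview API types; the feature names ("data_0", "softmaxout_1") and the asset path are what the SqueezeNet sample uses, but treat them as assumptions and check your own model's metadata:

```csharp
// Sketch of the Windows.AI.MachineLearning.Preview flow (UWP, build 17110+).
// inputFrame is assumed to be a VideoFrame holding the picture to classify.
using System;
using System.Collections.Generic;
using Windows.AI.MachineLearning.Preview;
using Windows.Media;
using Windows.Storage;

// 1. Load the ONNX model shipped as an app asset.
StorageFile modelFile = await StorageFile.GetFileFromApplicationUriAsync(
    new Uri("ms-appx:///Assets/SqueezeNet.onnx"));
LearningModelPreview model =
    await LearningModelPreview.LoadModelFromStorageFileAsync(modelFile);

// 2. Bind the input image and a list that will receive the output scores
//    (one score per label in Labels.json).
var binding = new LearningModelBindingPreview(model);
binding.Bind("data_0", inputFrame);            // model's input feature name
var outputScores = new List<float>(new float[1000]);
binding.Bind("softmaxout_1", outputScores);    // model's output feature name

// 3. Evaluate; afterwards the highest entries in outputScores map to
//    the most likely labels.
LearningModelEvaluationResultPreview result =
    await model.EvaluateAsync(binding, string.Empty);
```

This snippet would live inside an async method of the app's page; error handling and the label lookup are omitted to keep the flow visible.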
In the next post I'll analyze the app's C# code in more detail, because I was surprised by how simple the whole operation is. Here it is as a preview.
Greetings @ Burlington