#MachineLearning – Windows ML Hello World, or how to create a UWP app that uses a local ONNX model


Continuing the series of Windows Machine Learning posts, today I'll take a quick look at one of the sample apps from the Windows Universal Samples repository on GitHub.

Let's start with one of the most interesting ones: the [SqueezeNet Object Detection Sample].

This sample uses a model called SqueezeNet, which identifies the predominant object in an image. The SqueezeNet model is trained to recognize more than 1000 object classes, and it works quite well. For example, let's see what values it returns for my keyboard and a cup.


Fairly good. However, given a bad picture of Venom, the result is not so good.


Well, time to analyze the code a bit. The first thing to keep in mind is that we need Windows 10 build 17110 or higher. Currently, this means being part of the Windows Insider program.


I’m working with this version: 17120.1


To use the SqueezeNet model, our UWP app uses two files:

  • Labels.json, which contains the list of all the labels (object classes) registered in the model.
  • SqueezeNet.onnx, the ML model itself.
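As a rough sketch of how a label file like this can be parsed into plain strings (the two label lines below are illustrative examples, not taken from the actual file; the sample app uses a similar split-and-trim approach):

```csharp
using System;
using System.Collections.Generic;

class LabelParserSketch
{
    // Parse lines of the form  "0": "tench, Tinca tinca",  into label strings.
    public static List<string> ParseLabels(IEnumerable<string> lines)
    {
        var labels = new List<string>();
        char[] charToTrim = { '\"', ' ', ',' };
        foreach (var line in lines)
        {
            // Split only on the first ':' so labels containing ':' are kept whole
            var indexAndLabel = line.Split(new[] { ':' }, 2);
            if (indexAndLabel.Length == 2)
            {
                labels.Add(indexAndLabel[1].Trim(charToTrim));
            }
        }
        return labels;
    }

    static void Main()
    {
        var labels = ParseLabels(new[]
        {
            "\"0\": \"tench, Tinca tinca\",",
            "\"1\": \"goldfish, Carassius auratus\""
        });
        Console.WriteLine(labels[0]); // tench, Tinca tinca
    }
}
```

Splitting with a count of 2 (instead of splitting on every ':') is a small tweak so that a label containing a colon is not silently dropped.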

The UWP app uses the new [Windows.AI.MachineLearning.Preview] API to load the ONNX model; it then binds an input image and processes the output of the evaluation to determine the labels found.

In the next post I will analyze the app's C# code in more detail, because I was surprised at how simple it is. In the meantime, here is the full listing:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Windows.AI.MachineLearning.Preview;
using Windows.Graphics.Imaging;
using Windows.Media;
using Windows.Storage;
using Windows.Storage.Pickers;
using Windows.Storage.Streams;
using Windows.UI.Core;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media.Imaging;

namespace SqueezeNetObjectDetection
{
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class MainPage : Page
    {
        private const string _kModelFileName = "SqueezeNet.onnx";
        private const string _kLabelsFileName = "Labels.json";

        private ImageVariableDescriptorPreview _inputImageDescription;
        private TensorVariableDescriptorPreview _outputTensorDescription;
        private LearningModelPreview _model = null;
        private List<string> _labels = new List<string>();
        private List<float> _outputVariableList = new List<float>();

        public MainPage()
        {
            InitializeComponent();
        }

        /// <summary>
        /// Load the label and model files
        /// </summary>
        private async Task LoadModelAsync()
        {
            await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => StatusBlock.Text = $"Loading {_kModelFileName} … patience ");

            try
            {
                // Parse labels from label file
                var file = await StorageFile.GetFileFromApplicationUriAsync(new Uri($"ms-appx:///Assets/{_kLabelsFileName}"));
                using (var inputStream = await file.OpenReadAsync())
                using (var classicStream = inputStream.AsStreamForRead())
                using (var streamReader = new StreamReader(classicStream))
                {
                    string line = "";
                    char[] charToTrim = { '\"', ' ' };
                    while (streamReader.Peek() >= 0)
                    {
                        line = streamReader.ReadLine();
                        var indexAndLabel = line.Split(':');
                        if (indexAndLabel.Count() == 2)
                        {
                            _labels.Add(indexAndLabel[1].Trim(charToTrim));
                        }
                    }
                }

                // Load Model
                var modelFile = await StorageFile.GetFileFromApplicationUriAsync(new Uri($"ms-appx:///Assets/{_kModelFileName}"));
                _model = await LearningModelPreview.LoadModelFromStorageFileAsync(modelFile);

                // Retrieve model input and output variable descriptions
                // (we already know the model takes an image in and outputs a tensor)
                List<ILearningModelVariableDescriptorPreview> inputFeatures = _model.Description.InputFeatures.ToList();
                List<ILearningModelVariableDescriptorPreview> outputFeatures = _model.Description.OutputFeatures.ToList();

                _inputImageDescription =
                    inputFeatures.FirstOrDefault(feature => feature.ModelFeatureKind == LearningModelFeatureKindPreview.Image)
                    as ImageVariableDescriptorPreview;

                _outputTensorDescription =
                    outputFeatures.FirstOrDefault(feature => feature.ModelFeatureKind == LearningModelFeatureKindPreview.Tensor)
                    as TensorVariableDescriptorPreview;
            }
            catch (Exception ex)
            {
                await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => StatusBlock.Text = $"error: {ex.Message}");
                _model = null;
            }
        }

        /// <summary>
        /// Trigger file picker and image evaluation
        /// </summary>
        /// <param name="sender"></param>
        /// <param name="e"></param>
        private async void ButtonRun_Click(object sender, RoutedEventArgs e)
        {
            ButtonRun.IsEnabled = false;
            UIPreviewImage.Source = null;
            try
            {
                // Load the model
                await Task.Run(async () => await LoadModelAsync());

                // Trigger file picker to select an image file
                // (a FileOpenPicker needs at least one entry in FileTypeFilter)
                FileOpenPicker fileOpenPicker = new FileOpenPicker();
                fileOpenPicker.SuggestedStartLocation = PickerLocationId.PicturesLibrary;
                fileOpenPicker.FileTypeFilter.Add(".jpg");
                fileOpenPicker.FileTypeFilter.Add(".png");
                fileOpenPicker.ViewMode = PickerViewMode.Thumbnail;
                StorageFile selectedStorageFile = await fileOpenPicker.PickSingleFileAsync();

                SoftwareBitmap softwareBitmap;
                using (IRandomAccessStream stream = await selectedStorageFile.OpenAsync(FileAccessMode.Read))
                {
                    // Create the decoder from the stream
                    BitmapDecoder decoder = await BitmapDecoder.CreateAsync(stream);

                    // Get the SoftwareBitmap representation of the file in BGRA8 format
                    softwareBitmap = await decoder.GetSoftwareBitmapAsync();
                    softwareBitmap = SoftwareBitmap.Convert(softwareBitmap, BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied);
                }

                // Display the image
                SoftwareBitmapSource imageSource = new SoftwareBitmapSource();
                await imageSource.SetBitmapAsync(softwareBitmap);
                UIPreviewImage.Source = imageSource;

                // Encapsulate the image within a VideoFrame to be bound and evaluated
                VideoFrame inputImage = VideoFrame.CreateWithSoftwareBitmap(softwareBitmap);

                await Task.Run(async () =>
                {
                    // Evaluate the image
                    await EvaluateVideoFrameAsync(inputImage);
                });
            }
            catch (Exception ex)
            {
                await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => StatusBlock.Text = $"error: {ex.Message}");
                ButtonRun.IsEnabled = true;
            }
        }

        /// <summary>
        /// Evaluate the VideoFrame passed in as arg
        /// </summary>
        /// <param name="inputFrame"></param>
        /// <returns></returns>
        private async Task EvaluateVideoFrameAsync(VideoFrame inputFrame)
        {
            if (inputFrame != null)
            {
                try
                {
                    // Create bindings for the input and output buffer
                    LearningModelBindingPreview binding = new LearningModelBindingPreview(_model as LearningModelPreview);
                    binding.Bind(_inputImageDescription.Name, inputFrame);
                    binding.Bind(_outputTensorDescription.Name, _outputVariableList);

                    // Process the frame with the model
                    LearningModelEvaluationResultPreview results = await _model.EvaluateAsync(binding, "test");
                    List<float> resultProbabilities = results.Outputs[_outputTensorDescription.Name] as List<float>;

                    // Find the result of the evaluation in the bound output
                    // (the top classes detected with the max confidence)
                    List<float> topProbabilities = new List<float>() { 0.0f, 0.0f, 0.0f };
                    List<int> topProbabilityLabelIndexes = new List<int>() { 0, 0, 0 };
                    for (int i = 0; i < resultProbabilities.Count(); i++)
                    {
                        for (int j = 0; j < 3; j++)
                        {
                            if (resultProbabilities[i] > topProbabilities[j])
                            {
                                topProbabilityLabelIndexes[j] = i;
                                topProbabilities[j] = resultProbabilities[i];
                                break;
                            }
                        }
                    }

                    // Display the result
                    string message = "Predominant objects detected are:";
                    for (int i = 0; i < 3; i++)
                    {
                        message += $"\n{_labels[topProbabilityLabelIndexes[i]]} with confidence of {topProbabilities[i]}";
                    }
                    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => StatusBlock.Text = message);
                }
                catch (Exception ex)
                {
                    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => StatusBlock.Text = $"error: {ex.Message}");
                }
            }
            await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => ButtonRun.IsEnabled = true);
        }
    }
}
```
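The sample keeps a running top-3 with nested loops over the output tensor. As a side note, an equivalent and arguably clearer way to pick the top-k class indexes is a small LINQ helper; this is just a sketch with made-up probability values, not part of the sample:

```csharp
using System;
using System.Linq;

class TopKSketch
{
    // Return the indexes of the k highest values, highest first.
    public static int[] TopIndexes(float[] probabilities, int k)
    {
        return probabilities
            .Select((value, index) => (value, index)) // pair each value with its index
            .OrderByDescending(pair => pair.value)    // sort by probability, descending
            .Take(k)                                  // keep the k best
            .Select(pair => pair.index)               // we only need the indexes
            .ToArray();
    }

    static void Main()
    {
        // Fake probabilities standing in for the model's 1000-class output
        float[] fakeProbabilities = { 0.05f, 0.70f, 0.02f, 0.90f, 0.15f };
        var top3 = TopIndexes(fakeProbabilities, 3);
        Console.WriteLine(string.Join(",", top3)); // 3,1,4
    }
}
```

The indexes returned by `TopIndexes` can then be used to look up the matching entries in `_labels`, exactly as the sample does with `topProbabilityLabelIndexes`.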

Happy Coding!

Greetings @ Burlington

El Bruno


