Windows 10 and YOLOV2 for Object Detection Series
- Introduction to YoloV2 for object detection
- Create a basic Windows10 App and use YoloV2 in the camera for object detection
- Transform YoloV2 output analysis to C# classes and display them in frames
- Resize YoloV2 output to support multiple formats and process and display frames per second
Now that we have downloaded the YoloV2 ONNX file, we can create a UWP App that uses the model. Here is the step by step, with some comments
1. We create a new Windows 10 UWP App. As always, the name is super original
2. We are going to use the camera as the image source to be processed with YoloV2. There are many examples of how to do this; in this case we will use a control from Windows Community Toolkit V 3.0 (as I wrote in a previous post). We add the following packages via NuGet
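The original package list was shown in a screenshot; as a sketch, the reference that brings in the CameraPreview control is probably along these lines (the exact version number is an assumption based on the Toolkit 3.0 release):

```xml
<ItemGroup>
  <!-- Windows Community Toolkit controls, including CameraPreview -->
  <PackageReference Include="Microsoft.Toolkit.Uwp.UI.Controls" Version="3.0.0" />
</ItemGroup>
```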
3. We add a CameraPreview control to our XAML, and we are just 2 lines away from having the camera working.
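The XAML for this step is roughly the following sketch (the control name `CameraPreviewControl` is an assumption):

```xml
<Page
    x:Class="App1.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:controls="using:Microsoft.Toolkit.Uwp.UI.Controls">

    <Grid>
        <!-- The Windows Community Toolkit control that renders the camera feed -->
        <controls:CameraPreview x:Name="CameraPreviewControl" />
    </Grid>
</Page>
```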
4. We enable the necessary permissions to use the camera in the app manifest. With the following lines, our app has a functional camera
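In Package.appxmanifest, the capabilities section should look something like this (the CameraPreview control documentation asks for both webcam and microphone):

```xml
<Capabilities>
  <Capability Name="internetClient" />
  <!-- Required so the app can access the camera and its audio stream -->
  <DeviceCapability Name="webcam" />
  <DeviceCapability Name="microphone" />
</Capabilities>
```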
5. The time has come to start using YoloV2. The latest versions of Visual Studio 2017 allow us to import an ONNX model into a project, and the IDE creates the classes needed to work with it. We add [Tiny-YOLOv2.onnx] as an existing file to the project.
Our solution should be similar to the following
6. The class that Visual Studio generates to work with the model is pretty ugly.
7. My suggestion: replace [8d4d0fa662b14686b1865e0e6d3c598e] with [TinyYoloV2]
8. Now we can review the generated code, and within it we find 3 classes
- An Input class for the Model
- An Output class for the Model
- A class to work with the Model
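As a sketch, the generated code looks roughly like this (class names shown after the suggested rename; the exact shape depends on the Visual Studio version, and this targets the Windows ML preview API from Windows 10 build 1803):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Windows.AI.MachineLearning.Preview;
using Windows.Media;
using Windows.Storage;

// Input class: a single camera frame
public sealed class TinyYoloV2ModelInput
{
    public VideoFrame image { get; set; }
}

// Output class: the raw detection grid produced by Tiny YOLOv2
public sealed class TinyYoloV2ModelOutput
{
    public IList<float> grid { get; set; } = new List<float>();
}

// Wrapper class: loads the ONNX file and evaluates inputs against it
public sealed class TinyYoloV2Model
{
    private LearningModelPreview _learningModel;

    public static async Task<TinyYoloV2Model> CreateTinyYoloV2Model(StorageFile file)
    {
        var learningModel = await LearningModelPreview.LoadModelFromStorageFileAsync(file);
        var model = new TinyYoloV2Model();
        model._learningModel = learningModel;
        return model;
    }

    public async Task<TinyYoloV2ModelOutput> EvaluateAsync(TinyYoloV2ModelInput input)
    {
        var output = new TinyYoloV2ModelOutput();
        var binding = new LearningModelBindingPreview(_learningModel);
        binding.Bind("image", input.image);
        binding.Bind("grid", output.grid);
        await _learningModel.EvaluateAsync(binding, string.Empty);
        return output;
    }
}
```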
It’s time to load the model into our App. We do this before initializing the camera, as shown in the following code
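A minimal sketch of that page initialization, assuming the ONNX file lives under `Assets\` and the control is named `CameraPreviewControl`:

```csharp
private TinyYoloV2Model _model;

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    // Load the ONNX model packaged with the app before starting the camera
    var modelFile = await StorageFile.GetFileFromApplicationUriAsync(
        new Uri("ms-appx:///Assets/Tiny-YOLOv2.onnx"));
    _model = await TinyYoloV2Model.CreateTinyYoloV2Model(modelFile);

    // The "2 lines" from step 3: subscribe to frames and start the preview
    CameraPreviewControl.PreviewFrameArrived += CameraPreviewControl_PreviewFrameArrived;
    await CameraPreviewControl.StartAsync();
}
```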
9. Because we will use the model as part of our App, we have to mark the ONNX file as Content in the build process
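This is normally done from the file's Properties window in Visual Studio; in the .csproj it ends up as something like this (the `Assets\` path is an assumption):

```xml
<ItemGroup>
  <!-- Build Action = Content, so the ONNX file is deployed with the app package -->
  <Content Include="Assets\Tiny-YOLOv2.onnx" />
</ItemGroup>
```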
10. The next step is to evaluate a frame from the camera with the YoloV2 model. Here the code shows how we use the classes that were created when we imported the model.
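The frame handler can be sketched like this (handler and field names are assumptions from the earlier steps):

```csharp
private async void CameraPreviewControl_PreviewFrameArrived(object sender, FrameEventArgs e)
{
    // Skip frames until the model is loaded and a valid frame arrives
    if (_model == null || e.VideoFrame?.SoftwareBitmap == null)
        return;

    // Wrap the camera frame as the model input and run the evaluation
    var input = new TinyYoloV2ModelInput { image = e.VideoFrame };
    var output = await _model.EvaluateAsync(input);

    // output.grid now holds the raw 13 x 13 x 125 Tiny YOLOv2 tensor,
    // which we will decode into bounding boxes in a later post
}
```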
11. Finally, a breakpoint is a useful way to inspect the output of the model, which we will have to process later.
Well, at this point we are already using the YoloV2 model to analyze the frames from the camera. In the following posts I’ll write about how to work with the output of the model to get information that makes sense to us!
Important: At some point I should write about the model conversion capabilities that Visual Studio Tools for AI brings to us.
Greetings @ Toronto
- YOLO: Real-time object detection
- YOLO9000: Better, Faster, Stronger, by Joseph Redmon and Ali Farhadi (2016)
- ONNX Tools
- Azure AI Gallery, Tiny YOLO V2
- El Bruno, Windows Community Toolkit V 3.0 makes life incredibly easy if you need working with the camera in a UWP App
- Visual Studio Marketplace, Visual Studio Tools for AI