In my previous post, I shared an example where I analyzed the camera feed using an Image Recognition model created with Custom Vision. Today I’ll expand the sample and show the detected MVP logos in real time, framed in the drone camera feed.
Let’s take a look at the demo in action in the following image.
At the top of the image, we can see the app console log with the information received for each analyzed frame. When a logo is detected, we can see the tag, the probability and the bounding box coordinates.
In order to position the frames in the correct location, I need to do some math using the current camera and image sizes and the returned bounding box values for height, left, top and width (lines 87-110).
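Custom Vision returns the bounding box as normalized values between 0 and 1, relative to the analyzed image, so placing the frame is essentially a matter of scaling those values by the feed size in pixels. A minimal sketch of that math (the method and parameter names are my own, not the ones from the sample):

```csharp
using System;

public static class BoundingBoxMapper
{
    // Custom Vision returns Left, Top, Width and Height as values in [0, 1],
    // relative to the analyzed image. To draw the frame on the camera feed
    // we scale each value by the corresponding feed dimension in pixels.
    public static (int X, int Y, int Width, int Height) ToPixelRect(
        int feedWidth, int feedHeight,
        double left, double top, double width, double height)
    {
        return ((int)(left * feedWidth),
                (int)(top * feedHeight),
                (int)(width * feedWidth),
                (int)(height * feedHeight));
    }

    public static void Main()
    {
        // Example: a 1280x720 camera feed and a detection in the upper-middle area.
        var rect = ToPixelRect(1280, 720, 0.5, 0.25, 0.2, 0.1);
        Console.WriteLine(rect); // (640, 180, 256, 72)
    }
}
```

If the camera feed and the analyzed image have different resolutions, the same scaling applies; you just use the size of the surface you are drawing on.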
I’ll be sharing some experiences and insights around Machine Learning, Computer Vision and IoT. Here are my session details.
How a PoC at home can scale to Enterprise Level using Custom Vision APIs (v2!)
It all started with a DIY project to use Computer Vision for security cameras at home. A custom Machine Learning model is the core component used to analyze pictures to detect people, animals and more in a house environment. The AI processing is performed at the edge, in dedicated hardware and the collected information is stored in the cloud.
The same idea can be applied to several CCTV scenarios, like parking lots, train stations, malls and more. However, moving this into enterprise scale brings a set of challenges, which are going to be described and explained in this session.
In this new version of the session, we will start from scratch and create a complete “Parking Garage Open Space Tracker” solution with live devices and live cars (small ones, of course).
I’m updating some of my demos for Microsoft Ignite and I found an amazing new feature in Custom Vision: Suggested Tags (see references). This feature is super useful in automatic detection scenarios, like the parking lot demo. I’ll use the official documentation to describe this feature.
When you tag images for a Custom Vision model, the service uses the latest trained iteration of the model to predict the labels of untagged images. It then shows these predictions as suggested tags, based on the selected confidence threshold and prediction uncertainty. You can then either confirm or change the suggestions, speeding up the process of manually tagging the images for training.
As usual, let’s use 2 images to describe this. Once I add a new image to my Custom Vision project, I can start to select objects and tag them. However, if I have already trained my project, I will also see the [Suggested object on] option. With the default threshold value of 66%, the auto-label feature does not detect any area. However, if I lower the threshold, for example to 28%, it will automatically detect one of the parking slots: Slot 3. Once I’m happy with the suggested objects, I can confirm them and that’s it! Super easy.
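The threshold behavior can be illustrated with a tiny snippet: a suggestion is only surfaced when its predicted probability is at or above the selected threshold. The tags and probabilities below are made up for the example:

```csharp
using System;
using System.Linq;

public static class ThresholdDemo
{
    public static void Main()
    {
        // Hypothetical suggested-tag candidates with their predicted probabilities.
        var suggestions = new[]
        {
            (Tag: "Slot3", Probability: 0.28),
            (Tag: "Slot1", Probability: 0.12),
        };

        // At the default 66% threshold, nothing qualifies...
        var atDefault = suggestions.Where(s => s.Probability >= 0.66).ToArray();
        Console.WriteLine(atDefault.Length); // 0

        // ...but lowering the threshold to 28% surfaces Slot 3.
        var lowered = suggestions.Where(s => s.Probability >= 0.28).ToArray();
        Console.WriteLine(string.Join(",", lowered.Select(s => s.Tag))); // Slot3
    }
}
```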
This feature is amazing, and I’m looking forward to using it in real projects to see how much time it saves in image labeling scenarios.
Bonus: Below you can see the before and after of the demo project. My daughter also decorated the new parking lot box with some IronMan content. I’ll need to figure out how to connect this with my session speech!
In my Custom Vision samples, I usually send an image to a CustomVision.ai HTTP endpoint and process the JSON result. The results are very easy to understand; however, I created a C# converter class to help with the Custom Vision results. To create this class, I navigated to http://json2csharp.com/, pasted a sample result and made some changes to the output. The resulting, useful class is this:
It’s a very simple class, and the best way to describe it is to show a usage scenario.
The main points to note are:
Lines 1 to 9 open a local file, create an HTTP client and make the HTTP POST request.
Line 12 converts the JSON response (a string) to a C# object and then gets the best prediction.
Here, the best prediction is a single LINQ expression that sorts the predictions by probability and selects the first one.
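As a rough sketch of that flow, here is what the deserialization and the “best prediction” LINQ look like. I’m using System.Text.Json instead of the converter class from the post, the property names follow the shape of the Custom Vision prediction JSON, the sample values are made up, and the HTTP part is shown only as comments with placeholder values:

```csharp
using System;
using System.Linq;
using System.Text.Json;
using System.Text.Json.Serialization;

// Minimal classes matching the shape of a Custom Vision prediction result
// (trimmed to the fields used here; not the exact converter class from the post).
public class PredictionResult
{
    [JsonPropertyName("predictions")]
    public Prediction[] Predictions { get; set; }
}

public class Prediction
{
    [JsonPropertyName("probability")]
    public double Probability { get; set; }

    [JsonPropertyName("tagName")]
    public string TagName { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        // Posting the image would look roughly like this (placeholder endpoint/key):
        //   using var client = new HttpClient();
        //   client.DefaultRequestHeaders.Add("Prediction-Key", "<your key>");
        //   var response = await client.PostAsync(endpointUrl, new ByteArrayContent(imageBytes));
        //   var json = await response.Content.ReadAsStringAsync();

        // Sample response body, trimmed to the fields we use.
        var json = @"{ ""predictions"": [
            { ""probability"": 0.28, ""tagName"": ""Slot3"" },
            { ""probability"": 0.91, ""tagName"": ""Slot1"" } ] }";

        var result = JsonSerializer.Deserialize<PredictionResult>(json);

        // The "best prediction": sort by probability, descending, and take the first.
        var best = result.Predictions
            .OrderByDescending(p => p.Probability)
            .First();

        Console.WriteLine($"{best.TagName} {best.Probability}"); // Slot1 0.91
    }
}
```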
I’m writing these lines on the airplane, and also drafting a bigger post about the full and amazing experience at the Caribbean Developer Conference. So, I’ll start with the usual slides and materials, and also use this post later as a reference for the full experience.