#Azure – Sending custom Telemetry and Event information from a #RaspberryPi device to #AzureIoT Central

Hi!

Azure IoT Central is one of the amazing services we can use on Azure. I was wondering how easy it is to use a Raspberry Pi running Raspbian with Azure IoT, and here is my experience.

Let’s assume we have a device up to date and running Raspbian; our next step will be to create an Azure IoT Central application. The official step-by-step guide is the main resource for this:

Create an Azure IoT Central application (see references)

Once we have our application, we can quickly create a new Raspberry Pi device and use it. However, I’ll do an extra step first (lessons learned as a handsome developer):

Create a Device Template

Go to [Device Templates] and create a new template

[Image: Azure IoT Central – create a new Device Template]

For Raspberry Pi, I’ll name this [Raspberry Pi Dev]

[Image: Azure IoT Central – new Device Template named Raspberry Pi Dev]

So now I can add a new real device in the Devices section of the left menu.

[Image: Azure IoT Central – Raspberry Pi Dev, add a new real device]

Once you create a new real device, it is important to copy and save the connection information for later. To access it, use the [Connect] button at the top right.

[Image: Azure IoT Central – real device connection information]

Almost there. There is an official tutorial that explains how to send random telemetry information with a Python script on a Raspberry Pi; I’ll use it as the base for this scenario:

Connect a Raspberry Pi to your Azure IoT Central application (Python) (see references)

For this demo, I’ll add a custom telemetry property and a custom event to the device. Since I won’t use the device to track temperature, accelerometer data, and more, I think it makes sense to track some custom information.

So, I’ll go back to my Device Template definition and add a new Telemetry entry named [t1], with the following information.

[Image: Azure IoT Central – new telemetry definition for t1]

And now I can run a custom version of my script that sends new telemetry information for [t1] (sample in line 18).
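The script itself is embedded in the post as a gist; as a reference, here is a minimal sketch of the idea using the azure-iot-device SDK (the official tutorial uses a different client library; the ID scope, device ID, and key below are placeholders copied from the [Connect] dialog):

    import json
    import random
    import time

    from azure.iot.device import IoTHubDeviceClient, Message, ProvisioningDeviceClient

    # Placeholders: copy these values from the device [Connect] dialog
    ID_SCOPE = "your-id-scope"
    DEVICE_ID = "your-device-id"
    PRIMARY_KEY = "your-primary-key"

    # Provision through the Device Provisioning Service to find the assigned hub
    provisioning = ProvisioningDeviceClient.create_from_symmetric_key(
        provisioning_host="global.azure-devices-provisioning.net",
        registration_id=DEVICE_ID,
        id_scope=ID_SCOPE,
        symmetric_key=PRIMARY_KEY)
    registration = provisioning.register()

    # Connect to the assigned IoT hub as the device
    client = IoTHubDeviceClient.create_from_symmetric_key(
        symmetric_key=PRIMARY_KEY,
        hostname=registration.registration_state.assigned_hub,
        device_id=DEVICE_ID)
    client.connect()

    while True:
        # The field name "t1" must match the telemetry defined in the Device Template
        payload = {"temperature": random.uniform(18, 25), "t1": random.uniform(0, 100)}
        client.send_message(Message(json.dumps(payload)))
        time.sleep(10)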

After a couple of minutes running the sample script, I can see the telemetry information for T1. In this view, I enabled [Temperature] and [T1] to display the timeline.

[Image: Azure IoT Central – device dashboard displaying Temperature and T1 telemetry]

And the next step will be to add an event, which is also a very important use case in Azure IoT. Back in the Device Template, I add a new event named [event1].

[Image: Azure IoT Central – new event definition for event1]

And I added some extra lines of code to also send an event between telemetry messages (line 22).
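Again a sketch rather than the original gist: the event is just another message whose payload field name matches the event defined in the template. Reusing the client from the previous snippet:

    # The field name "event1" must match the event defined in the Device Template
    client.send_message(Message(json.dumps({"event1": "Button pressed"})))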

In the following image, we can see how the events appear in the timeline; we can also get some extra details by clicking on each event.

[Image: Azure IoT Central – device dashboard displaying telemetry and events]

Very cool! The next step will be to integrate this with some image recognition scenarios.

Happy Coding!

Greetings @ Burlington

El Bruno

References


#Python – Let’s use a #FaceRecognition demo app for a performance comparison between #RaspberryPi3 and #RaspberryPi4

Hi!

I started to do some tests with the new Raspberry Pi 4 and the results are amazing. I’m not a performance expert, so I decided to pick up some of the demos / apps I’ve been creating for the Raspberry Pi and run them on both models: Raspberry Pi 3 B+ and Raspberry Pi 4.

I started with an amazing set of tutorials on how to perform Face Recognition from Adrian Rosebrock (see references). I’ve been using the Face Recognition Python package for these scenarios and it’s an amazing one.

I added some code to a custom version of Adrian’s Face Recognition sample, and it looks great. The main idea was to track the current FPS in real time (similar to the work I did with the ImageAI and Hololens sample a couple of days ago, see references).

This sample loads a file with 15 trained faces and analyzes the camera feed frame by frame to:

  • Detect faces in the frame.
  • If a face is detected, draw a frame around it.
  • For each detected face, analyze whether it matches a trained face.
  • If the face is part of the trained dataset, the app will add the name of the person on top of the frame.

I display the processed FPS in real time, using a USB camera on a Raspberry Pi 3 B+. Even after a lot of tweaks to get the best performance out of the device, I could never reach 1 FPS; the average was between 0.6 and 0.9 FPS on a Raspberry Pi 3 B+.
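The full sample is in the GitHub repository linked below; here is a minimal sketch of that recognition loop, with hypothetical known_encodings / known_names standing in for the file with the 15 trained faces:

    import time

    import cv2
    import face_recognition

    # Hypothetical placeholders: in the real sample these are loaded from a file
    known_encodings, known_names = [], []

    video = cv2.VideoCapture(0)
    frames, start = 0, time.time()

    while True:
        ret, frame = video.read()
        if not ret:
            break
        # OpenCV frames are BGR; face_recognition expects RGB
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        locations = face_recognition.face_locations(rgb)
        encodings = face_recognition.face_encodings(rgb, locations)
        for (top, right, bottom, left), encoding in zip(locations, encodings):
            matches = face_recognition.compare_faces(known_encodings, encoding)
            name = known_names[matches.index(True)] if True in matches else "Unknown"
            cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
            cv2.putText(frame, name, (left, top - 8),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 1)
        # Running average of frames per second since start
        frames += 1
        fps = frames / (time.time() - start)
        cv2.putText(frame, f"FPS: {fps:.2f}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
        cv2.imshow("Face Recognition", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    video.release()
    cv2.destroyAllWindows()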

[Image: Python face recognition on a Raspberry Pi 3 with live FPS]

IMHO, these results are great for a small device like a Raspberry Pi 3 B+. But now it was time to test it on the new Raspberry Pi 4. An important note: even though I ran these tests on a Raspberry Pi 4 with 4 GB of RAM, the performance results would be similar on an RPi 4 with just 1 GB of RAM. We have more memory, but it is the processor improvements that are quite significant in the new version.

I installed all the necessary software on the Raspberry Pi 4 and got roughly 3x better results. I even ran this at 1080p resolution to get a sense of the real processing time. The average was between 2.3 and 2.4 FPS on a Raspberry Pi 4.

[Image: Python face recognition on a Raspberry Pi 4 with live FPS]

Amazing! In this scenario the Raspberry Pi 4 is almost 3 times faster than the Raspberry Pi 3. And again, these are amazing times for a 50 USD device.

The sample source code is available at https://github.com/elbruno/Blog/tree/master/20190819%20Rpi%203%20vs%20Rpi%204%20Face%20Recognition

I even have time for some BBQ time with family and friends!

Happy coding!

Greetings @ Toronto

El Bruno

References

#Python – Detecting #Hololens in real-time in a webcam feed using #ImageAI and #OpenCV (thanks to @OlafenwaMoses)

Hi!

Let’s start with a very quick intro:

During the past months, I’ve been playing around with several image analysis tools, and ImageAI (see references) is one that deserves a full series of posts. Please take a look at the product and the source code on GitHub, and also please thank the one behind this: Moses Olafenwa (@OlafenwaMoses).

And now, my 2 cents. I’ve started to test ImageAI to create my own image detection models. Most of the time this is a hard path to walk; however, ImageAI showed me an interesting option.

… with the latest release of ImageAI v2.1.0, support for training your custom YOLOv3 models to detect literally any kind and number of objects is now fully supported, …

Wow! That means I can pick up my own image dataset, train on top of a YOLOv3 model, and use the result as a trained model. Again, this is amazing.

So, I started to read the article [Train Object Detection AI with 6 lines of code, see references], where Olafenwa explains how to do this using a dataset with almost 500 images of Hololens and Oculus Rift devices.

The code is very simple and easy to read. There are also examples on how to analyze a single file, a video, or even a camera feed. The analysis output can also be written to a new file, a processed video, or even a full log file with the detected information.

I started to read the code samples and realized that I was missing a scenario:

Display the real-time feed from a webcam, analyze each webcam frame, and if a device is found, draw a frame on the real-time feed to display it.

I use OpenCV to access my camera, and it took me some time to figure out how to convert my OpenCV camera frame to the format needed by ImageAI. In the end, thanks to the GitHub code, I managed to create this (very slow but working) demo.
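Not the original demo code, but a minimal sketch of the idea, assuming hypothetical model file names and assuming the ImageAI version in use accepts OpenCV numpy frames via input_type="array":

    import cv2
    from imageai.Detection.Custom import CustomObjectDetection

    # Hypothetical file names for the trained YOLOv3 model and its config
    detector = CustomObjectDetection()
    detector.setModelTypeAsYOLOv3()
    detector.setModelPath("hololens-yolov3.h5")
    detector.setJsonPath("detection_config.json")
    detector.loadModel()

    camera = cv2.VideoCapture(0)
    while True:
        ret, frame = camera.read()
        if not ret:
            break
        # Feed the OpenCV frame to ImageAI and get an annotated array back
        annotated, detections = detector.detectObjectsFromImage(
            input_image=frame, input_type="array", output_type="array",
            minimum_percentage_probability=40)
        for detection in detections:
            print(detection["name"], detection["percentage_probability"],
                  detection["box_points"])
        cv2.imshow("Hololens detection", annotated)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    camera.release()
    cv2.destroyAllWindows()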

As usual in these scenarios, now it’s time to improve the performance and start testing some tweaks to get a decent app up and running.

And of course, the code

Happy coding!

Greetings @ Toronto

El Bruno

Resources

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python (5/N)

Hi !

I’ll start with my posts

  1. Detecting Faces with 20 lines in Python
  2. Face Recognition with 20 lines in Python
  3. Detecting Facial Features with 20 lines in Python
  4. Facial Features and Face Recognition with 20 lines in Python

And after yesterday’s post I realized that the code works, but there is room for performance improvement. So I went back to my first sample, the one for face detection, and added some code to measure frames per second (FPS).

My initial code processed almost 6 FPS. Then I started to read the code and think about improvements, and I managed to get an amazing 30+ FPS.

So, before moving forward, I want to highlight the StackOverflow post that quickly pointed me to the easiest way to build a stopwatch in Python.
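The trick is nothing more than taking time.time() deltas; here is a minimal sketch of that stopwatch idea:

    import time

    start = time.time()
    frames = 0
    for _ in range(100):      # stand-in for the camera loop
        time.sleep(0.01)      # stand-in for the per-frame processing
        frames += 1
    elapsed = time.time() - start
    print(f"{frames} frames in {elapsed:.2f}s -> {frames / elapsed:.2f} FPS")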

My original code was this one:

And then I realized that I could use some of the OpenCV functions to speed up the face detection process. I really don’t need to process a full HD image (1920 x 1080); I can resize the frame to a quarter of the size and work with that. That’s how, based on some of the samples, I got the following code:

Line 12 performs the initial resize, and then I scale the positions back up before drawing the face frame. This process works almost 6 times faster than the original one.
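Not the original gist, but a minimal sketch of that resize-and-rescale trick: detect on a quarter-size frame, then multiply the coordinates by 4 before drawing on the full-size frame:

    import cv2
    import face_recognition

    video = cv2.VideoCapture(0)
    while True:
        ret, frame = video.read()
        if not ret:
            break
        # Detect on a quarter-size frame: ~16x fewer pixels to scan
        small = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
        rgb_small = cv2.cvtColor(small, cv2.COLOR_BGR2RGB)
        for top, right, bottom, left in face_recognition.face_locations(rgb_small):
            # Scale the detected positions back up to the full-size frame
            cv2.rectangle(frame, (left * 4, top * 4), (right * 4, bottom * 4),
                          (0, 255, 0), 2)
        cv2.imshow("Face detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    video.release()
    cv2.destroyAllWindows()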

I’ll continue improving the code and samples, and sharing my learning path !

Happy Coding!

Greetings @ Burlington

El Bruno

Resources

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python (4/N)

Hi !

Quick post today. I’ll pick up yesterday’s demo, showing the facial features, and add Face Recognition on top of that. In other words, we’ll move from this

To this

With a couple of extra lines for face recognition

There is some room for performance improvement, so I’ll focus on that in the next posts.

The complete project is available here https://github.com/elbruno/Blog/tree/master/20190528%20Python%20FaceRecognition

Happy Coding!

Greetings @ Burlington

El Bruno

Resources

My Posts

  1. Detecting Faces with 20 lines in Python
  2. Face Recognition with 20 lines in Python
  3. Detecting Facial Features with 20 lines in Python

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python (3/N)

Hi !

In my previous posts I explained how to detect faces and perform face recognition in Python. Today I’ll explore another feature in the face_recognition package: finding facial features.

The live camera output will be something like this:

Note: Special thanks to my daughter, who is always willing to help me with this.

The main sample in the source code uses a photo to detect facial features and creates a new one with the detected features. In the following sample, it’s amazing to see that it detects a faraway face behind the main ones and also, somehow, detects some landmarks behind my girl’s glasses:

I wanted to see how fast this library performs with a live camera feed, and the results are very good.

I spent some time figuring out the best way to draw lines with OpenCV; in the end, the polylines() function is the one doing all the magic (lines 14 to 17). It took me some time to find the best way to deal with the matrix transformations and some other performance tricks, but in the end I got this up and running in 25 lines, which is kind of amazing. And the final code is very simple.
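Not the original 25 lines, but a minimal sketch of the approach: face_landmarks() returns a dictionary of named point lists per face, and cv2.polylines() draws each feature in a single call:

    import cv2
    import numpy as np
    import face_recognition

    video = cv2.VideoCapture(0)
    while True:
        ret, frame = video.read()
        if not ret:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        for face in face_recognition.face_landmarks(rgb):
            # One polyline per facial feature (chin, eyebrows, nose, eyes, lips)
            for points in face.values():
                cv2.polylines(frame, [np.array(points, dtype=np.int32)],
                              False, (0, 255, 0), 1)
        cv2.imshow("Facial features", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    video.release()
    cv2.destroyAllWindows()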

The complete project is available here https://github.com/elbruno/Blog/tree/master/20190528%20Python%20FaceRecognition

Happy Coding!

Greetings @ Burlington

El Bruno

Resources

My Posts

  1. Detecting Faces with 20 lines in Python
  2. Face Recognition with 20 lines in Python

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python (2/N)

Hi !

Yesterday I explained how to write a couple of lines in Python to perform live face detection in a webcam feed [Post]. Check the resources section to find more about the tools I’m using.

Today, I’ll add some more code to perform face recognition. And as usual, I’ll work with my kids to test this out. I’ll start by adding 2 face encodings, for Valentino and myself. The code is simple enough; I use a simple 300×300 head-shot photo for each person to get the face encoding.
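Not the original gist, but a minimal sketch of that step, with hypothetical photo file names:

    import face_recognition

    def load_known_faces():
        known_encodings, known_names = [], []
        # Hypothetical file names: one 300x300 head-shot photo per person
        for name, photo in (("Valentino", "valentino.jpg"), ("Bruno", "bruno.jpg")):
            image = face_recognition.load_image_file(photo)
            known_encodings.append(face_recognition.face_encodings(image)[0])
            known_names.append(name)
        return known_encodings, known_names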

The previous function returns a set of arrays with the face encodings and the face names. In the complete file, I use these to analyze the camera frame (line 31) and later to check for face matches (lines 34 to 36).

The last lines are mostly cosmetic: they draw the frames around the detected faces and show the names.

The complete project is available here https://github.com/elbruno/Blog/tree/master/20190521%20Python%20FaceRecognition

Happy Coding!

Greetings @ Burlington

El Bruno

Resources

My Posts

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python

Hi !

I’ve written a lot about how to use AI models in C# to perform tasks like face recognition, speech analysis, and more. During the Chicago CodeCamp, someone asked me how to perform Face Recognition in Python. I didn’t have any working sample to showcase this, and I failed trying to write a 2-minute app. So I added this to my to-do list.

For this demo I’ll use Anaconda as the base Python distribution and Visual Studio Code as the code editor. There are several packages to perform face detection in Python; I’ll use a mix of OpenCV and Adam Geitgey’s Face Recognition package to access the camera and detect and recognize faces.

I’ll start by installing the packages to use in the Python app: dlib, OpenCV and face_recognition.

"C:/Program Files (x86)/Microsoft Visual Studio/Shared/Anaconda3_86/python.exe" -m pip install dlib --user  

"C:/Program Files (x86)/Microsoft Visual Studio/Shared/Anaconda3_86/python.exe" -m pip install face_recognition --user

"C:/Program Files (x86)/Microsoft Visual Studio/Shared/Anaconda3_86/python.exe" -m pip install opencv-python --user  

And the first step will be to detect faces and draw frames around them, all of this in 20 lines of code.
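The 20 lines are embedded in the original post as a gist; as a reference, here is a minimal sketch of the same idea:

    import cv2
    import face_recognition

    video = cv2.VideoCapture(0)
    while True:
        ret, frame = video.read()
        if not ret:
            break
        # face_recognition expects RGB, OpenCV delivers BGR
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        for top, right, bottom, left in face_recognition.face_locations(rgb):
            cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
        cv2.imshow("Face detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    video.release()
    cv2.destroyAllWindows()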

When we run the app, we will see the camera feed and frames around the detected faces. In my next post I’ll add some extra code to perform face recognition.

Happy Coding!

Greetings @ Toronto

El Bruno

Resources

#MLNET – How to use the AutoML API in a Console App

Hi !

In my last posts I was testing AutoML using the Model Builder inside Visual Studio and also the CLI commands. There is also an API to use this in a .NET app, and the usage is very simple.

It all starts, of course, by adding the [Microsoft.ML.AutoML] NuGet package.

I read the documentation in [How to use the ML.NET automated machine learning API], and I created the following sample using the same data as in my previous posts.

The final output displays the results for each one of the tests and showcases the top 3 ranked models. This time, the LightGBM trainer is once again the best trainer to choose.

There is a full set of samples in the Machine Learning .NET Samples repository; I’ve reused some classes from its Common folder.

The complete source code is available at https://github.com/elbruno/Blog/tree/master/20190516%20MLNET%20AutoML%20API

Happy Coding!

Greetings @ Toronto

El Bruno

References

#Windows10 – Windows #VisionSkills sample UWP App

Hi!

Yesterday the Windows team announced the preview version of Windows Vision Skills. So today I browsed the samples on GitHub and created a simplified version of the skeleton tracker using a live feed from a webcam.

Here are some notes about my GitHub sample

  • The UWP app must target Windows 10 version 1809
  • I added the NuGet packages [Microsoft.AI.Skills.Vision.SkeletalDetectorPreview] and [Microsoft.Toolkit.Uwp.UI.Controls]
  • The MainView uses the CameraPreview control from the [Microsoft.Toolkit.Uwp.UI.Controls] toolkit.
  • Each frame is processed, and I use a SkeletalDetectorBinding to detect skeletons / bodies
  • The core detection is performed here
        private async Task RunSkillAsync(VideoFrame frame, bool isStream)
        {
            m_evalPerfStopwatch.Restart();

            // Update input image and run the skill against it
            await m_skeletalDetectorBinding.SetInputImageAsync(frame);
            await m_skeletalDetectorSkill.EvaluateAsync(m_skeletalDetectorBinding);

            m_evalPerfStopwatch.Stop();
            m_skeletalDetectionRunTime = m_evalPerfStopwatch.ElapsedMilliseconds;

            await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
            {
                m_bodyRenderer.Update(m_skeletalDetectorBinding.Bodies, !isStream);
                m_bodyRenderer.IsVisible = true;
                UISkillOutputDetails.Text = $"Found {m_skeletalDetectorBinding.Bodies.Count} bodies (took {m_skeletalDetectionRunTime} ms)";
            });
        }
  • There is also a BodyRenderer.cs class used to draw the skeletons on top of the CameraPreview Image control. It draws lines in an empty canvas.

You can download the sample code from here https://github.com/elbruno/Blog/tree/master/20190501%20VisionSkills%20Skeleton%20Sample

Greetings @ Burlington

El Bruno

References