In my last post I shared some lines of code that allowed me to run some of the face recognition demos 6 times faster, and I added a Frames Per Second (FPS) counter to my samples. Later, thinking about performance, I realized that I don’t need to work with a full HD picture (1920 x 1080), so I added some code to resize the photo before the face detection process.
However, while I was coding around this solution I also realized that I may want to initialize my camera to start at a lower resolution. So, I searched online for how to do this with OpenCV and I found 3 beautiful lines of code.
So, I managed to improve my processing code from 20 FPS to +30 FPS, which is very good! Later in this post series I’ll try some similar FPS tests on a smaller device, and I’ll share how that works.
And after yesterday’s post I realized that the code was working, but there was room for performance improvement. So, I went back to my 1st sample, the one for face detection, and I added some code to measure Frames Per Second (FPS).
In my initial code, the app was processing almost 6 FPS. Then I started to read the code and think about improvements, and I managed to get an amazing +30 FPS.
So, before moving forward, I want to highlight this StackOverflow post that quickly pointed me to the easiest way to build a stopwatch in Python.
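The stopwatch idea boils down to `time.time()` around the processing loop; a minimal sketch, where the loop body is a stand-in for the real per-frame work:

```python
import time

def measure_fps(process_frame, frames=100):
    """Run process_frame repeatedly and return the measured FPS."""
    start = time.time()
    for _ in range(frames):
        process_frame()
    elapsed = time.time() - start
    # Guard against a zero elapsed time on coarse clocks.
    return frames / elapsed if elapsed > 0 else float("inf")
```

In the real app the callable would grab a frame from the camera and run the face detection on it.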
My original code was this one:
And then, I realized that I could use some of the OpenCV functions to speed up the face detection process. I really don’t need to process a full HD image (1920 x 1080); I can resize the frame to a quarter of its size and work with that. Based on some of the samples, I ended up with the following code:
Line 12 performs the initial resize, and then I recalculate the positions before drawing the face frame. This process works almost 6 times faster than the original one.
I’ll continue improving the code and samples, and sharing my learning path!
In my previous posts I explained how to detect faces and perform face recognition in Python. Today I’ll explore another feature in the face_recognition package: Find Facial Features.
The live camera output will be something like this:
Note: Special thanks to my daughter who is always OK to help me with this.
The main sample in the source code uses a photo to detect facial features and creates a new one with the detected features. In the following sample, it’s amazing to see that it detects a faraway face behind the main ones and also, somehow, detects some landmarks behind my girl’s glasses:
I wanted to see how fast this library works with a live camera feed, and the results are very good.
I spent some time figuring out the best way to draw lines with OpenCV; in the end the polylines() function is the one doing all the magic (lines 14 to 17). It took me some time to find the best way to deal with matrix transformations and some other performance tricks, but in the end I got this up and running in 25 lines, which is kind of amazing. And the final code is very simple:
Yesterday I explained how to write a couple of lines in Python to perform live face detection in a webcam feed [Post]. Check the resources section to find more about the tools I’m using.
Today, I’ll add some more code to perform face recognition. And as usual, I’ll work with my kids to test this out. I’ll start by adding 2 face encodings, for Valentino and myself. The code is simple enough, and I use a simple 300×300 head-shot photo to train and get each face encoding.
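A sketch of that encoding step, assuming the face_recognition package is installed; the function name and the photo paths in the docstring are hypothetical placeholders, not the post’s exact code:

```python
def load_known_faces(photos):
    """Build parallel lists of face encodings and names.

    photos: dict mapping a name to a head-shot image path,
    e.g. {"Valentino": "valentino.jpg", "Bruno": "bruno.jpg"}.
    """
    import face_recognition  # imported lazily so the sketch stays light

    known_encodings, known_names = [], []
    for name, path in photos.items():
        image = face_recognition.load_image_file(path)
        encodings = face_recognition.face_encodings(image)
        if encodings:  # use the first face found in the photo
            known_encodings.append(encodings[0])
            known_names.append(name)
    return known_encodings, known_names
```

One clean head-shot per person is usually enough, since face_recognition reduces each face to a 128-number encoding that is compared by distance.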
The previous function returns a set of arrays with the face encodings and the face names. In the complete file, I use these to analyze the camera frame (line 31) and later to check for face matches (lines 34 to 36).
The last lines are mostly cosmetic: they draw the frames around the detected faces and show the names.
I’ve written a lot about how to use AI models in C# to perform tasks like face recognition, speech analysis, and more. During the Chicago CodeCamp, someone asked me how to perform face recognition in Python. I didn’t have any working sample to showcase this, and I failed in my attempt to write a 2-minute app. So I added this to my ToDo list.
For this demo I’ll use Anaconda as the base Python distribution and Visual Studio Code as the code editor. There are several packages to perform face detection in Python. I’ll use a mix of OpenCV and Adam Geitgey’s Face Recognition package to access the camera and to detect and recognize faces.
I’ll start by installing the packages the Python app needs: dlib, OpenCV and face_recognition.
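The exact commands from the post didn’t survive the export; these are the usual pip package names for the three libraries (dlib builds from source, so it assumes a working C++ toolchain; on Anaconda, `conda install -c conda-forge dlib` is a common alternative):

```shell
# dlib compiles during install, so this step can take a while
pip install dlib
# OpenCV's Python bindings are published as opencv-python
pip install opencv-python
# face_recognition depends on dlib and ships its own models
pip install face_recognition
```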
I’ve been using Python and Jupyter notebooks more and more. And somehow, during this learning path, I also realized that I can use Visual Studio Code to code amazing Python apps, and also to edit and work with Jupyter notebooks.
If you are a VSCode Python developer, you may already know some of the features available in the tool. I won’t describe them here, because you may find the official documentation very useful (see the links in the references below).
The Python extension provides many features for editing Python source code in Visual Studio Code:
However, during the past months I’ve also been working a lot with Jupyter notebooks, and I was very happy when I realized that VSCode also has some cool features to work with notebooks. The core of a notebook is its cells, and we can define them with the prefix #%%.
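A minimal example of those cell markers in a plain .py file; VSCode shows a “Run Cell” code lens above each #%% line, and each cell runs on its own against the interactive Jupyter session:

```python
#%% Load some data
# Each "#%%" line starts a new cell that VSCode can run independently.
numbers = list(range(10))

#%% Compute something with it
total = sum(numbers)
print(total)
```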
This is how it looks inside the IDE, running a cell in the code:
Another interesting feature is to run notebooks in a remote Jupyter server, maybe using Azure Notebooks.
I haven’t tried this one, and it’s on my ToDo list for the near future.
On top of adding cell features to standard Python [.py] files, we can also edit standard Jupyter files. I’ve installed jupyter into one of my Anaconda local environments, and now I can edit the files inside VSCode.
First, I’ll be prompted to import the file as a standard Python file.
Now I’ve got my Jupyter notebook inside VSCode.
The final step will be to export my file or debug session, and for this we have the command [Python: Export …]
After an amazing night with my friends from Metro Toronto UG, it’s time to share the materials I used during the session. The initial idea was to talk a bit about Azure Notebooks, and somehow we also ended up talking about Cognitive Services and Custom Vision, it was great!
To start, the 15 minutes with the Keynote video:
And some of the links I used during the session: