#Python – Let’s use a #FaceRecognition demo app for a performance comparison between #RaspberryPi3 and #RaspberryPi4

Hi!

I started to do some tests with the new Raspberry Pi 4 and the results are amazing. I’m not a performance expert, so I decided to pick up some of the demos / apps I’ve created for the Raspberry Pi and run them on both models: the Raspberry Pi 3 B+ and the Raspberry Pi 4.

I started with an amazing set of tutorials from Adrian Rosebrock on how to perform face recognition (see references). I’ve been using his Face Recognition Python package for these scenarios and it’s an amazing one.

I added some code to a custom version of Adrian’s face recognition sample, and it looks great. The main idea was to track the current FPS in real time (similar to the work I did with the ImageAI and Hololens sample a couple of days ago, see references).

This sample loads a file with 15 trained faces and analyzes the camera feed frame by frame to do the following (a minimal sketch of the loop is included after the list):

  • Detect faces in the frame.
  • If a face is detected, draw a frame around it.
  • For each detected face, analyze whether it is one of the trained faces.
  • If the face is part of the trained dataset, the app adds the name of the person on top of the frame.
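The complete source code is linked at the end of the post; as a rough sketch only (my own summary of the loop, using the face_recognition and OpenCV packages; the encodings file name is a placeholder):

import pickle
import time
import cv2
import face_recognition

# load the previously trained face encodings (hypothetical file name)
with open("encodings.pickle", "rb") as f:
    data = pickle.load(f)  # {"encodings": [...], "names": [...]}

video = cv2.VideoCapture(0)
frame_count, start = 0, time.time()

while True:
    ret, frame = video.read()
    if not ret:
        break

    # face_recognition works with RGB images, OpenCV delivers BGR
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # 1. detect faces in the frame
    boxes = face_recognition.face_locations(rgb)
    encodings = face_recognition.face_encodings(rgb, boxes)

    for (top, right, bottom, left), encoding in zip(boxes, encodings):
        # 2. draw a frame around each detected face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)

        # 3. check whether the face matches one of the trained faces
        matches = face_recognition.compare_faces(data["encodings"], encoding)
        name = "Unknown"
        if True in matches:
            name = data["names"][matches.index(True)]

        # 4. add the name of the person on top of the frame
        cv2.putText(frame, name, (left, top - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 255, 0), 2)

    # track and display the current FPS
    frame_count += 1
    fps = frame_count / (time.time() - start)
    cv2.putText(frame, f"FPS: {fps:.2f}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 0, 255), 2)

    cv2.imshow("Face Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

video.release()
cv2.destroyAllWindows()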

The app displays the processed FPS in real time, using a USB camera on a Raspberry Pi 3 B+. Even with a lot of tweaks to get the best performance out of the device, I could never reach 1 FPS: the average was between 0.6 and 0.9 FPS on the Raspberry Pi 3 B+.

[Image: Python face recognition on a Raspberry Pi 3 with live FPS]

IMHO, these results are great for a small device like a Raspberry Pi 3 B+. But now it was time to test it on the new Raspberry Pi 4. An important note here: even though I did these tests on a Raspberry Pi 4 with 4 GB of RAM, the performance results are similar to an RPi 4 with just 1 GB of RAM. We have more memory, but it is the processor improvements in the new version that are quite significant.

I installed all the necessary software on the Raspberry Pi 4 and I got 3X better results. I even ran this at 1080p resolution to get a sense of the real processing time. The average was between 2.3 and 2.4 FPS on the Raspberry Pi 4.

[Image: Python face recognition on a Raspberry Pi 4 with live FPS]

Amazing! In this scenario the Raspberry Pi 4 is almost 3 times faster than the Raspberry Pi 3. And again, these are amazing times for a 50 USD device.

The sample source code is available at https://github.com/elbruno/Blog/tree/master/20190819%20Rpi%203%20vs%20Rpi%204%20Face%20Recognition

I even had time for some BBQ with family and friends!

Happy coding!

Greetings @ Toronto

El Bruno

References


#Python – Detecting #Hololens in realtime in webcam feed using #ImageAI and #OpenCV with performance improvements

Hi!

In my previous post I created a sample on how to use ImageAI and OpenCV to detect Hololens from a webcam frame (see references). I added some code to the last sample, and I found that the performance was not very good.

[Image: Python using ImageAI to detect a Hololens at less than 1 FPS]

With the previous sample code, I couldn’t process more than 1 frame per second. So, I started to make some improvements and I got this result

[Image: Python using ImageAI to detect a Hololens at a little more than 1 FPS]

Not an amazing result, but it’s still nice to have more than 1 frame per second analyzed.

I even removed the camera preview completely, and it still works at less than 1 FPS.

[Image: Python using ImageAI to detect a Hololens with no OpenCV camera preview]

So, now it’s time to read the ImageAI code in depth and learn from it. Fun times!

Happy coding!

Greetings @ Burlington

El Bruno

References

#Python – Detecting #Hololens in realtime in webcam feed using #ImageAI and #OpenCV (thanks to @OlafenwaMoses)

Hi!

Let’s start with a very quick intro:

During the past months, I’ve been playing around with several image analysis tools, and ImageAI (see references) is one that deserves a full series of posts. Please take a look at the product and the source code on GitHub, and also please thank the person behind it: Moses Olafenwa (@OlafenwaMoses).

And now, my 2 cents. I’ve started to test ImageAI to create my own image detection models. Most of the time this is a hard path to follow; however, ImageAI showed me an interesting option.

… with the latest release of ImageAI v2.1.0, support for training your custom YOLOv3 models to detect literally any kind and number of objects is now fully supported, …

Wow! That means I can pick up my own image dataset, train on top of YOLOv3, and use the result as a trained model. Again, this is amazing.

So, I started to read the article [Train Object Detection AI with 6 lines of code, see references], where Olafenwa explains how to do this using a dataset with almost 500 images of Hololens and Oculus Rift devices.

The code is very simple and easy to read. There are also examples of how to analyze a single file, a video, or even a camera feed. The output of the analysis can be a new file, a processed video, or even a full log file with the detected information.
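I won’t repeat the whole article here, but the core training code is roughly the following sketch (based on my reading of the ImageAI 2.1.0 custom detection docs; the data directory, object name and parameter values are just illustrative):

from imageai.Detection.Custom import DetectionModelTrainer

trainer = DetectionModelTrainer()
trainer.setModelTypeAsYOLOv3()
# folder that contains the train/ and validation/ images plus annotations
trainer.setDataDirectory(data_directory="hololens")
trainer.setTrainConfig(object_names_array=["hololens"],
                       batch_size=4,
                       num_experiments=100,
                       train_from_pretrained_model="pretrained-yolov3.h5")
trainer.trainModel()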

I started to read the code samples and I realized that I was missing a scenario:

Display the real-time feed from a webcam, analyze each webcam frame and, if a device is found, draw a frame on the real-time feed to show it.

I use OpenCV to access my camera, and it took me some time to figure out how to convert the OpenCV camera frame to the format ImageAI needs. In the end, thanks to the GitHub code, I managed to create this (very slow but working) demo.
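The demo code is embedded in the post, but the heart of it looks roughly like this sketch (my own summary; it assumes a custom Hololens model trained as in the previous article, uses placeholder model and config file names, and relies on ImageAI accepting a NumPy array as input, which depending on the version may require saving the frame to a file instead):

import cv2
from imageai.Detection.Custom import CustomObjectDetection

detector = CustomObjectDetection()
detector.setModelTypeAsYOLOv3()
detector.setModelPath("hololens-detection-model.h5")  # placeholder file name
detector.setJsonPath("detection_config.json")         # placeholder file name
detector.loadModel()

camera = cv2.VideoCapture(0)

while True:
    ret, frame = camera.read()
    if not ret:
        break

    # NOTE: array input/output is an assumption; some ImageAI versions only
    # accept a file path, in which case write the frame to disk first.
    _, detections = detector.detectObjectsFromImage(
        input_image=frame,
        input_type="array",
        output_type="array",
        minimum_percentage_probability=40)

    for detection in detections:
        # each detection has the object name, probability and box points
        x1, y1, x2, y2 = detection["box_points"]
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, detection["name"], (x1, y1 - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

    cv2.imshow("Hololens detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()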

As usual in this kind of scenario, now it’s time to improve the performance and start testing some tweaks to get a decent app up and running.

And of course, the code

Happy coding!

Greetings @ Toronto

El Bruno

Resources

#RaspberryPi – Running a #python script in a Python Virtual Environment on reboot / startup

Hi!

Adrian Rosebrock is a very smart person who has tons of great resources about computer vision on PyImageSearch.com. Most of them use Python, and he also has some focused on how to perform computer vision with OpenCV on a Raspberry Pi.

In the post [Running a Python + OpenCV script on reboot, see resources] he explains how to automatically run a Python script when a Raspberry Pi starts. He uses Python virtual environments, so the first 2 commands load the virtual environment; then he moves to the app folder and runs the Python script.

source ~/.profile
workon cv
cd /home/pi/pi-face-recognition
python startup.py

Something like this:

[Image: sourcing the profile and activating the cv virtual environment on the Raspberry Pi]

The suggested approach consists of creating a shell script [.sh file] with these lines and adding it to the auto start. However, once you create the file and test it, there seems to be an issue with the source command.

[Image: the source command not working in an SH file]

Ok, so no source command in an SH file. I started to think about installing all my Python dependencies directly for the main user; however, the idea of working with virtual environments is very useful for me. It was time to read online about Linux, Python and more.

Note: Before moving forward, I may need to add some context. I need to run my Python script in a terminal. My device will always auto-start with a 3.5-inch touch screen and a camera, so I need some GUI loaded.

This is an excellent article on how to add actions to the Raspberry Pi start-up [How to Execute a Script at Startup on the Raspberry Pi, see resources]. I added my SH file there, it didn’t work, and I still needed to figure out how to load a virtual environment and run a Python script.

After a couple of tests, I realized that all the files I need are part of the virtual environment location on the device.

[Image: the Raspberry Pi Python folder for virtual environments]

So, I only need to use the full path in my command to make it work without the [source] and [workon] commands. My complete command becomes:

/home/pi/.virtualenvs/cv/bin/python /home/pi/pi-face-recognition/startup.py

So, I edited my autostart file, adding this command:

[Image: the Raspberry Pi autostart file launching a Python script with a virtual environment]
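For reference, the line I ended up with is along these lines (a sketch, assuming the LXDE autostart file and lxterminal so the script stays visible in a terminal on the touch screen; adjust the paths to your own setup):

@lxterminal -e /home/pi/.virtualenvs/cv/bin/python /home/pi/pi-face-recognition/startup.py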

And done! My Python script, running in a Python virtual environment, now starts automatically when the device boots!

Happy coding!

Greetings @ Toronto

El Bruno

Resources

#Python – How to create a Virtual Environment in #Windows10 (Easy one, and brain backup!)

Hi!

Quick post today, and mostly a reminder on how to create a virtual environment in Python on Windows 10. I’ve been doing this mostly on my Mac and my Raspberry Pi, and I always forget how to do this on Windows, so … I’m writing this post to have it handy.

Install Python3.

Download the installer from the official Python site (see references). I usually install it in the root of my C: hard drive and name the folder with the version, for example: c:\Python37_64.

Remember to also add this folder and its Scripts subfolder to the PATH environment variable.

Note: Once you start to install tools which use Python, your OS becomes a nightmare. You will have your own installed Python, the version installed with Visual Studio, the one that comes with Visual Studio Code, and another one for Anaconda.

I’m not sure if this is a best practice or not, but I usually remove all the other versions and keep mine in the root of the C: drive.

Ok, let’s go on.

For a new virtual environment named “testEnv”, open a command prompt and navigate to the Python folder, then run the command

python -m virtualenv testEnv

After a couple of seconds, the virtual environment will be installed, and you can use it by running the [activate.bat] script. In this example:

  • The virtual environment will be created at [c:\Python37_64\testEnv]
  • The virtual environment activate script will be at [c:\Python37_64\testEnv\Scripts\activate.bat]

So the full command sequence is similar to this one:
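Reconstructed from the steps above (assuming the c:\Python37_64 install folder used earlier), it would be something like:

cd c:\Python37_64
python -m virtualenv testEnv
testEnv\Scripts\activate.bat

As a side note, python -m venv testEnv is the built-in alternative if the virtualenv package is not installed.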

Now you have your virtual environment up and running, and you may want to start adding your own packages or tools, for example NumPy, or to list the installed packages.
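Inside the activated environment, those commands are simply:

pip install numpy
pip list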

Finally, you may leave the virtual environment with the command

deactivate

Happy Coding!

Greetings @ Toronto

El Bruno

Resources

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python (6/N)

Hi !

I’ll start with my posts

  1. Detecting Faces with 20 lines in Python
  2. Face Recognition with 20 lines in Python
  3. Detecting Facial Features with 20 lines in Python
  4. Facial Features and Face Recognition with 20 lines in Python
  5. Performance improvements with code

In my last post I shared some lines of code which allowed me to run some of the face recognition demos 6 times faster, and I added a frames per second (FPS) counter to my samples. Later, thinking about performance, I realized that I don’t need to work with a full HD picture (1920 x 1080), so I added some code to resize the frame before the face detection process.

However, while I was coding around this solution I also realized that I may want to initialize my camera to start at a lower resolution. So, I searched online for how to do this with OpenCV and I found 3 beautiful lines of code.

[Image: opening the camera with OpenCV at a lower resolution]
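They are, more or less, the standard OpenCV way to do it (a sketch; the 640 x 480 resolution is just an example value):

import cv2

camera = cv2.VideoCapture(0)
# ask the camera for a lower resolution before grabbing frames
camera.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)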

So, I managed to improve my processing from 20 FPS to over 30 FPS, which is very good! Later in this series I’ll try some similar FPS tests on a smaller device, and I’ll see and share how this works.

Happy Coding!

Greetings @ Burlington

El Bruno

Resources

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python (5/N)

Hi !

I’ll start with my posts

  1. Detecting Faces with 20 lines in Python
  2. Face Recognition with 20 lines in Python
  3. Detecting Facial Features with 20 lines in Python
  4. Facial Features and Face Recognition with 20 lines in Python

And after yesterday’s post I realized that the code was working, but there was room for performance improvement. So, I went back to my 1st sample, the one for face detection, and I added some code to measure frames per second (FPS).

In my initial code, the app was processing almost 6 FPS. Then I started to read the code, think about improvements, and I managed to get an amazing 30+ FPS.

So, before moving forward, I want to highlight the StackOverflow post that quickly pointed me to the easiest way to build a stopwatch in Python.
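The idea is simply to time the loop with time.time(); a minimal sketch of the FPS counter I added:

import time

start_time = time.time()
frame_count = 0

# inside the capture loop, after each processed frame:
frame_count += 1
elapsed = time.time() - start_time
print(f"FPS: {frame_count / elapsed:.2f}")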

My original code was this one:

And then I realized that I could use some of the OpenCV functions to speed up the face detection process. I really don’t need to process a full HD image (1920 x 1080); I can resize the frame to a quarter of its size and work with that. Based on some of the samples, I got the following code:

Line 12 performs the initial resize, and then I recalculate the positions back before drawing the face frame. This process works almost 6 times faster than the original one.
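As a rough sketch of the idea (not the exact code from the screenshot, just my summary using face_recognition and OpenCV; the 0.25 factor gives the quarter-size frame and the x4 scaling maps the positions back):

import cv2
import face_recognition

video = cv2.VideoCapture(0)

while True:
    ret, frame = video.read()
    if not ret:
        break

    # resize to a quarter of the original size to speed up detection
    small = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
    rgb_small = cv2.cvtColor(small, cv2.COLOR_BGR2RGB)

    face_locations = face_recognition.face_locations(rgb_small)

    for top, right, bottom, left in face_locations:
        # scale the positions back up before drawing on the full-size frame
        top, right, bottom, left = top * 4, right * 4, bottom * 4, left * 4
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)

    cv2.imshow("Face Detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

video.release()
cv2.destroyAllWindows()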

I’ll continue improving the code and samples, and sharing my learning path!

Happy Coding!

Greetings @ Burlington

El Bruno

Resources

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python (4/N)

Hi !

Quick post today. I’ll pick up yesterday’s demo, showing the facial features, and add face recognition on top of that. In other words, we’ll move from this

To this

With a couple of extra lines for face recognition

There is some room for performance improvement, so I’ll focus on this in next posts.

The complete project is available here https://github.com/elbruno/Blog/tree/master/20190528%20Python%20FaceRecognition

Happy Coding!

Greetings @ Burlington

El Bruno

Resources

My Posts

  1. Detecting Faces with 20 lines in Python
  2. Face Recognition with 20 lines in Python
  3. Detecting Facial Features with 20 lines in Python

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python (3/N)

Hi !

In my previous posts I explained how to detect faces and perform face recognition in Python. Today I’ll explore another feature in the face_recognition package: finding facial features.

The live camera output will be something like this:

Note: Special thanks to my daughter who is always OK to help me with this.

The main sample in the source code uses a photo to detect facial features and creates a new image with the detected features drawn. In the following sample, it’s amazing to see that it detects a faraway face behind the main ones and also, somehow, detects some landmarks behind my girl’s glasses:

I wanted to see how fast this library performs with a live camera feed, and the results are very good.

I spent some time figuring out the best way to draw lines with OpenCV; in the end, the polylines() function is the one doing all the magic (lines 14 to 17). It took me some time to find the best way to deal with the matrix transformations and some other performance tricks, but in the end I got this up and running in 25 lines, which is kind of amazing. And the final code is very simple:
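As a rough sketch of the idea (not the exact 25 lines, just my summary using face_recognition.face_landmarks() and cv2.polylines()):

import numpy as np
import cv2
import face_recognition

video = cv2.VideoCapture(0)

while True:
    ret, frame = video.read()
    if not ret:
        break

    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # each item is a dict: feature name -> list of (x, y) points
    for face_landmarks in face_recognition.face_landmarks(rgb):
        for feature, points in face_landmarks.items():
            # polylines needs the points as an int32 NumPy array
            pts = np.array(points, dtype=np.int32)
            cv2.polylines(frame, [pts], False, (0, 255, 0), 1)

    cv2.imshow("Facial Features", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

video.release()
cv2.destroyAllWindows()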

The complete project is available here https://github.com/elbruno/Blog/tree/master/20190528%20Python%20FaceRecognition

Happy Coding!

Greetings @ Burlington

El Bruno

Resources

My Posts

  1. Detecting Faces with 20 lines in Python
  2. Face Recognition with 20 lines in Python

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python (2/N)

Hi !

Yesterday I explained how to write a couple of lines in Python to perform live face detection in a webcam feed [Post]. Check the resources section to find out more about the tools I’m using.

Today, I’ll add some more code to perform face recognition. And as usual, I’ll work with my kids to test this out. I’ll start by adding 2 face encodings, for Valentino and myself. The code is simple enough, and I use a simple 300×300 head-shot photo to train and get each face encoding.
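The exact code is in the repo linked below; a minimal sketch of that function (the photo file names are placeholders):

import face_recognition

def get_known_faces():
    # load one head-shot photo per person and compute its face encoding
    bruno_image = face_recognition.load_image_file("bruno.jpg")
    valentino_image = face_recognition.load_image_file("valentino.jpg")

    bruno_encoding = face_recognition.face_encodings(bruno_image)[0]
    valentino_encoding = face_recognition.face_encodings(valentino_image)[0]

    # the two arrays used later to match faces in the camera frames
    known_face_encodings = [bruno_encoding, valentino_encoding]
    known_face_names = ["Bruno", "Valentino"]
    return known_face_encodings, known_face_names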

The previous function returns a set of arrays with the face encodings and the face names. In the complete file, I use these to analyze the camera frame (line 31) and later to check for face matches (lines 34 to 36).

The last lines are mostly cosmetic: they draw the frames around the detected faces and show the names.

The complete project is available here https://github.com/elbruno/Blog/tree/master/20190521%20Python%20FaceRecognition

Happy Coding!

Greetings @ Burlington

El Bruno

Resources

My Posts