#Coding4Fun – How to control your #drone with 20 lines of code! (21/N)


Hi!

In this post series I already wrote about how to detect faces with a camera and OpenCV. However, a drone can also be moved on command, so let’s write some lines to detect a face and calculate its orientation and distance from the center of the camera frame.

In order to do this, first let’s draw a grid in the camera frame, and once a face is detected, show its distance and orientation from the center.

Face detected on camera, with its position calculated from the center

Let’s start with a Grid. The idea is to create a 3×3 grid in the camera frame, and use the center cell as reference for the detected objects. The code to create a 3×3 grid is this one:

import cv2

# Camera settings
camera_Width  = 1024  # 1280 # 640
camera_Heigth = 780   # 960  # 480
centerZone    = 100

# Grid line color (green) and thickness
lineColor = (0, 255, 0)
lineThickness = 2

def displayGrid(frame):
    # Add a 3x3 grid: 2 vertical and 2 horizontal lines around the center zone
    cv2.line(frame, (int(camera_Width/2) - centerZone, 0), (int(camera_Width/2) - centerZone, camera_Heigth), lineColor, lineThickness)
    cv2.line(frame, (int(camera_Width/2) + centerZone, 0), (int(camera_Width/2) + centerZone, camera_Heigth), lineColor, lineThickness)
    cv2.line(frame, (0, int(camera_Heigth/2) - centerZone), (camera_Width, int(camera_Heigth/2) - centerZone), lineColor, lineThickness)
    cv2.line(frame, (0, int(camera_Heigth/2) + centerZone), (camera_Width, int(camera_Heigth/2) + centerZone), lineColor, lineThickness)

We use OpenCV’s line() function and do some calculations to get the start and end points for the 4 grid lines: 2 vertical and 2 horizontal. For this demo, I’ll implement this using my main webcam.
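To make the arithmetic easier to follow, here is a small sketch that computes the start and end points of the 4 lines from the same settings. Note that gridLines() is a hypothetical helper, not part of the original sample:

```python
# Hypothetical helper: computes the start/end points of the four grid
# lines, using the same arithmetic as displayGrid() above.
def gridLines(width, height, zone):
    half_w, half_h = int(width / 2), int(height / 2)
    return [
        ((half_w - zone, 0), (half_w - zone, height)),  # left vertical
        ((half_w + zone, 0), (half_w + zone, height)),  # right vertical
        ((0, half_h - zone), (width, half_h - zone)),   # top horizontal
        ((0, half_h + zone), (width, half_h + zone)),   # bottom horizontal
    ]

# Each pair can then be passed to cv2.line(frame, start, end, lineColor, lineThickness)
print(gridLines(1024, 780, 100))
```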

drone 3x3 grid in the camera frame

Based on my face detection samples and other samples on GitHub (see references), I’ll now calculate the position of the detected face (given as x, y, h, w) relative to the center of the camera:

def calculatePositionForDetectedFace(frame, x, y, h, w):
    # colorGreen, colorRed, colorBlue and messageThickness are defined
    # elsewhere in the full sample
    # calculate the direction and relative position of the face
    cx = int(x + (w / 2))  # center X of the face
    cy = int(y + (h / 2))  # center Y of the face

    if cx < int(camera_Width / 2) - centerZone:
        cv2.putText(frame, " LEFT ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 2)
        dir = 1
    elif cx > int(camera_Width / 2) + centerZone:
        cv2.putText(frame, " RIGHT ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 2)
        dir = 2
    elif cy < int(camera_Heigth / 2) - centerZone:
        cv2.putText(frame, " UP ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 2)
        dir = 3
    elif cy > int(camera_Heigth / 2) + centerZone:
        cv2.putText(frame, " DOWN ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 2)
        dir = 4
    else:
        dir = 0

    # display detected face frame, a line from the center and the direction to go
    cv2.line(frame, (int(camera_Width / 2), int(camera_Heigth / 2)), (cx, cy), colorRed, messageThickness)
    cv2.rectangle(frame, (x, y), (x + w, y + h), colorBlue, messageThickness)
    cv2.putText(frame, str(int(x)) + " " + str(int(y)), (x - 20, y - 45), cv2.FONT_HERSHEY_COMPLEX, 0.7, colorRed, messageThickness)

The output is similar to this one:
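The if/elif chain in calculatePositionForDetectedFace() boils down to a simple classification of the face center against the center zone. Here is that logic isolated in a hypothetical helper (the default values match the camera settings above):

```python
# Hypothetical helper mirroring the if/elif chain above: given the face
# center (cx, cy), it returns the same direction codes
# (0 = centered, 1 = left, 2 = right, 3 = up, 4 = down).
def faceDirection(cx, cy, width=1024, height=780, zone=100):
    if cx < int(width / 2) - zone:
        return 1   # LEFT
    elif cx > int(width / 2) + zone:
        return 2   # RIGHT
    elif cy < int(height / 2) - zone:
        return 3   # UP
    elif cy > int(height / 2) + zone:
        return 4   # DOWN
    return 0       # centered

print(faceDirection(512, 390))  # → 0 (exact center of the frame)
```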

And now, with the base code completed, it’s time to add this logic to the drone samples!

Bonus: the complete code.

Happy coding!

Greetings

El Bruno

References

#Coding4Fun – How to control your #drone with 20 lines of code! (12/N)


Hi!

Today’s code objective is very simple, based on a request I received from the internet:

The drone is flying very happy, but if the camera detects a face, the drone will flip out !

Let’s take a look at the program working:

This one is very similar to the previous one. I also realized that I may need a better camera to record the live action side by side with the drone footage, but I think you get the idea. The command to make the drone flip is “flip x”, where “x” is the direction. For example:

"flip l" # flip left
"flip r" # flip right
"flip f" # flip forward
"flip b" # flip back
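Since the drone commands are plain text, a tiny helper can validate the direction before sending. This is just a sketch (flipCommand() is a hypothetical name, and the actual UDP send to the drone is only shown as a comment):

```python
# Hypothetical helper that validates a flip direction and builds the
# text command the drone expects.
def flipCommand(direction):
    if direction not in ("l", "r", "f", "b"):
        raise ValueError("direction must be l, r, f or b")
    return "flip " + direction

# In the real sample the command goes out over the drone's UDP socket:
# sock.sendto(flipCommand("l").encode(), drone_address)
print(flipCommand("l"))  # → flip l
```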

Here is the code:

As I promised last time, in the next posts I’ll analyze in more detail how this works, plus a couple of improvements that I can implement.

Happy coding!

Greetings

El Bruno

References

My Posts

#Coding4Fun – How to control your #drone with 20 lines of code! (10/N)


Hi!

Back to some drone posts! I was kind of busy during the last few weeks, and now I can get back to writing about the drone.

OK, in the last posts I described how to connect to and work with the drone camera feed using OpenCV. Now, with 2 extra lines of code, we can also detect faces. Let’s take a look at the final sample.

drone camera and camera view performing face detection

In the previous image we can see 2 camera feeds: my computer webcam, where you can see how I hold the drone with its camera pointing at my face, and the drone camera feed, presented using OpenCV, drawing a frame over each detected face.

Let’s share some code insights:

  • As usual, I resize the camera feed to 320 x 240
  • The average processing speed is between 40 and 70 FPS
  • I use a Haar cascade classifier to detect the faces in each frame

Note: I need to write about Haar Cascades as part of my face detection post series.
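Because the detection runs on the resized 320 x 240 feed, one detail worth sketching is how the detected boxes could be mapped back to the original resolution if you want to draw on the full-size frame. scaleBoxes() is a hypothetical helper, and the cascade call is only shown as a comment:

```python
# faces = faceCascade.detectMultiScale(gray_small)  # runs on the 320x240 frame
# Hypothetical helper: rescales (x, y, w, h) boxes detected on the small
# frame back to the original resolution.
def scaleBoxes(boxes, scale):
    return [(x * scale, y * scale, w * scale, h * scale) for (x, y, w, h) in boxes]

print(scaleBoxes([(10, 20, 30, 40)], 2))  # → [(20, 40, 60, 80)]
```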

In my next posts, I’ll add some drone specific behaviors for each face detected.

Happy coding!

Greetings

El Bruno

References

My Posts

#RaspberryPi – Performance differences in #FaceRecognition using #OpenVino (code with @code!)


Hi!

I’ve been looking to use the amazing Intel Neural Compute Stick 2 for a while, and one of the first ideas I had was to check how fast my Raspberry Pi 4 can run using this device.

The Intel team released a nice step-by-step installation process for Raspberry Pi, and it works great. There are a couple of minor glitches that you need to figure out, like the latest package version; everything else works great.

Note: I downloaded my OpenVINO toolkit from here (https://download.01.org/opencv/2019/openvinotoolkit/R3/), and the downloaded file is l_openvino_toolkit_runtime_raspbian_p_2019.3.334.tgz.

Once installed, the 1st Python sample is a face recognition one. This sample analyzes an image file using OpenCV to detect faces, and creates a new output file with the detected faces. As I said, it is very straightforward.

So, I decided to create a new Python sample to run live face detection using the camera feed and also display the FPS. This is the code:

The code is very straightforward, and the main points are:

  • It uses a face detection model from the Intel Model Zoo, which comes as 2 files: face-detection-adas-0001.xml and face-detection-adas-0001.bin
  • Lines 22 and 23 are key: they define that OpenCV will load and run the model on the Intel device
  • I use imutils to resize the image to 640×480. Feel free to use any other library for this, even OpenCV
  • It also works with smaller resolutions; however, 640×480 is good for this demo
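Assuming the standard output layout of face-detection-adas-0001 (each detection row is image_id, label, confidence, xmin, ymin, xmax, ymax, with normalized coordinates), the post-processing step can be sketched like this (toPixelBoxes() is a hypothetical helper, not part of the sample):

```python
# Hypothetical helper: converts the model's normalized detections into
# pixel boxes, keeping only the confident ones.
def toPixelBoxes(detections, width, height, threshold=0.5):
    boxes = []
    for (_, _, conf, x1, y1, x2, y2) in detections:
        if conf > threshold:
            boxes.append((int(x1 * width), int(y1 * height),
                          int(x2 * width), int(y2 * height)))
    return boxes

# A single detection covering the center quarter of a 640x480 frame:
print(toPixelBoxes([(0, 1, 0.9, 0.25, 0.25, 0.5, 0.5)], 640, 480))  # → [(160, 120, 320, 240)]
```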

And here is the final app running, analyzing almost 8 frames per second (8 FPS).

That is more than 10 times faster than the 0.7 FPS I get without the Intel NCS2.

And, as I already wrote (see references), running Visual Studio Code on the Raspberry Pi is an amazing experience. I did all my Python coding in VS Code, remotely accessing my device via VNC. Python runs like a charm!

You can download the code from https://github.com/elbruno/rpiopenvino/tree/master/facedetection

References

My posts on Raspberry Pi

Dev posts for Raspberry Pi
Tools and Apps for Raspberry Pi
Setup the device
Hardware

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python (5/N)


Hi!

And after yesterday’s post I realized that the code is working, but there is room for performance improvement. So I went back to my 1st sample, the one for face detection, and added some code to measure frames per second (FPS).

In my initial code, the app was processing almost 6 FPS. Then I started to read the code, thought about improvements, and managed to get an amazing +30 FPS.

So, before moving forward, I want to highlight this StackOverflow post that quickly pointed me to the easiest way to build a stopwatch in Python.

My original code was this one:

And then I realized that I could use some OpenCV functions to speed up the face detection process. I really don’t need to process a full HD image (1920 x 1080); I can resize the frame to a quarter of the size and work with that. That’s how, based on some of the samples, I got the following code:

Line 12 performs the initial resize, and then I recalculate the positions before drawing the face frame. This process works almost 6 times faster than the original one.
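The two ideas can be sketched in isolation: a stopwatch built from time.time(), and a hypothetical upscaleFace() helper that recalculates the positions after detecting on a quarter-size frame (both names are mine, for illustration):

```python
import time

# Hypothetical helper for the speed-up trick: detect on a quarter-size
# frame, then multiply the coordinates back before drawing on the full frame.
def upscaleFace(x, y, w, h, factor=4):
    return (x * factor, y * factor, w * factor, h * factor)

# A minimal FPS stopwatch, in the spirit of the StackOverflow tip:
start = time.time()
# ... process one frame here ...
fps = 1.0 / max(time.time() - start, 1e-6)

print(upscaleFace(10, 20, 30, 40))  # → (40, 80, 120, 160)
```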

I’ll continue improving the code and samples, and sharing my learning path!

Happy Coding!

Greetings @ Burlington

El Bruno

Resources

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python (4/N)


Hi!

Quick post today. I’ll pick up yesterday’s demo, showing the facial features, and add face recognition on top of that. In other words, we’ll move from this

To this

With a couple of extra lines for face recognition

There is some room for performance improvement, so I’ll focus on that in the next posts.

The complete project is available here https://github.com/elbruno/Blog/tree/master/20190528%20Python%20FaceRecognition

Happy Coding!

Greetings @ Burlington

El Bruno

Resources

My Posts

  1. Detecting Faces with 20 lines in Python
  2. Face Recognition with 20 lines in Python
  3. Detecting Facial Features with 20 lines in Python

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python (3/N)


Hi!

In my previous posts I explained how to detect faces and perform face recognition in Python. Today I’ll explore another feature in the face_recognition package: finding facial features.

The live camera output will be something like this:

Note: Special thanks to my daughter who is always OK to help me with this.

The main sample in the source code uses a photo to detect facial features and creates a new one with the detected features. In the following sample, it’s amazing to see that it detects a faraway face behind the main ones and also, somehow, detects some landmarks behind my daughter’s glasses:

I wanted to see how fast this library works with a live camera feed, and the results are very good.

I spent some time figuring out the best way to draw lines with OpenCV; in the end the polylines() function is the one doing all the magic (lines 14 to 17). It took me some time to find the best way to deal with matrix transformations and some other performance tricks, but in the end I got this up and running in 25 lines, which is kind of amazing. And the final code is very simple:
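One of those transformation tricks can be sketched in isolation: if detection runs on a resized frame, each feature’s list of (x, y) points must be scaled back before drawing. scalePoints() is a hypothetical helper, and the polylines() call is only shown as a comment:

```python
# Hypothetical helper: face_recognition returns each facial feature as a
# list of (x, y) points; if detection ran on a resized frame, the points
# must be scaled back before drawing on the original frame.
def scalePoints(points, factor):
    return [(x * factor, y * factor) for (x, y) in points]

# pts = np.array(scalePoints(landmarks["chin"], 4), np.int32)
# cv2.polylines(frame, [pts], False, color, thickness)
print(scalePoints([(1, 2), (3, 4)], 4))  # → [(4, 8), (12, 16)]
```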

The complete project is available here https://github.com/elbruno/Blog/tree/master/20190528%20Python%20FaceRecognition

Happy Coding!

Greetings @ Burlington

El Bruno

Resources

My Posts

  1. Detecting Faces with 20 lines in Python
  2. Face Recognition with 20 lines in Python

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python (2/N)


Hi!

Yesterday I explained how to write a couple of lines in Python to perform live face detection in a webcam feed [Post]. Check the resources section to find more about the tools I’m using.

Today, I’ll add some more code to perform face recognition. And as usual, I’ll work with my kids to test this out. I’ll start by adding 2 face encodings, for Valentino and myself. The code is simple enough, and I use a simple 300×300 head-shot photo to train and get each face encoding.

The previous function returns a set of arrays with the face encodings and the face names. In the complete file, I use these to analyze the camera frame (line 31) and later to check for face matches (lines 34 to 36).

The last lines are mostly cosmetic: they draw the frames for the detected faces and show the names.
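The matching step can be sketched in isolation: compare_faces() returns one boolean per known encoding, and the sample just picks the name of the first match. nameForMatches() is a hypothetical helper illustrating that logic:

```python
# Hypothetical helper mirroring the matching step: given the booleans
# returned by face_recognition.compare_faces() and the known names,
# return the first matching name (or "Unknown").
def nameForMatches(matches, known_names, unknown="Unknown"):
    for match, name in zip(matches, known_names):
        if match:
            return name
    return unknown

print(nameForMatches([False, True], ["Valentino", "Bruno"]))  # → Bruno
```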

The complete project is available here https://github.com/elbruno/Blog/tree/master/20190521%20Python%20FaceRecognition

Happy Coding!

Greetings @ Burlington

El Bruno

Resources

My Posts

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python


Hi!

I’ve written a lot about how to use AI models in C# to perform tasks like face recognition, speech analysis, and more. During the Chicago CodeCamp, someone asked me how to perform face recognition in Python. I didn’t have any working sample to showcase this, and I failed trying to write a 2-minute app. So I added this to my ToDo list.

For this demo I’ll use Anaconda as the base Python distribution and Visual Studio Code as the code editor. There are several packages to perform face detection in Python. I’ll use a mix of OpenCV and Adam Geitgey’s face_recognition package to use the camera and to detect and recognize faces.

I’ll start by installing the packages to use in the Python app: dlib, OpenCV and face_recognition.

"C:/Program Files (x86)/Microsoft Visual Studio/Shared/Anaconda3_86/python.exe" -m pip install dlib --user  

"C:/Program Files (x86)/Microsoft Visual Studio/Shared/Anaconda3_86/python.exe" -m pip install face_recognition --user

"C:/Program Files (x86)/Microsoft Visual Studio/Shared/Anaconda3_86/python.exe" -m pip install opencv-python --user  

And the first step will be to detect faces and draw frames around them, all of this in 20 lines of code.

When we run the app, we will see the camera feed and frames around the detected faces. In my next post I’ll add some extra code to perform face recognition.

Happy Coding!

Greetings @ Toronto

El Bruno

Resources

#ComputerVision – How to create a 3D model of a face using a 2D photo (Amazing !)

Hello!

The advances in Computer Vision are becoming more and more impressive. The suite I know best and use the most is Azure Cognitive Services; however, there are surprises and advances that leave me with my mouth open.

This is the case of the work published by Aaron S. Jackson, Adrian Bulat, Vasileios Argyriou and Georgios Tzimiropoulos, where they explain how to create a 3D model from a 2D photo. The best thing is to see it in action:


I also recommend watching the video where they apply the algorithm in real time to faces in a video.

Now it’s time to try to explain, in words a 5-year-old could understand, how this works. Behind this algorithm is a Convolutional Neural Network (CNN), which has been trained with 2D images and their expected 3D results. The interesting thing about this model is that it has reached such a level of sophistication that it does not need specific reference points on a face; it works on any face.

With the 2D image information, it is possible to rebuild elements of the face, including parts that are not seen in the 2D image. In this way, and after much training, the CNN achieves the results that can be seen in the live demo!

Maybe it’s better to hear this in their own words

Greetings @ Toronto

El Bruno

References