After sharing a couple of sessions using the drone and working with the drone camera, a couple of people contacted me with issues they ran into while trying to install OpenCV and/or TensorFlow for use with Python.
There are plenty of tutorials about this, so I will share a very easy one.
The first step is to install Anaconda (see references). Once Anaconda is installed, let’s launch the Anaconda PowerShell Prompt to install the dependencies.
The two main commands to install the desired packages are:
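Assuming the packages are installed with pip from the Anaconda PowerShell Prompt, and using the standard PyPI package names, the two commands would be:

```shell
pip install opencv-python
pip install tensorflow
```

Depending on your Anaconda setup, `conda install` equivalents may also work, but pip is the most direct route for these two packages.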
Today’s code objective is very simple, based on a request I received from the internet:
The drone is flying very happily, but if the camera detects a face, the drone will flip out!
Let’s take a look at the program working:
This one is very similar to the previous one. I also realized that I may need a better camera to record the live action side by side with the drone footage, but I think you get the idea. The command to make the drone flip is “flip x”, where “x” is the direction. For example:
"flip l" # flip left
"flip r" # flip right
"flip f" # flip forward
"flip b" # flip back
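The flip commands above are plain strings sent to the drone over UDP. Below is a minimal sketch; the command address and port (192.168.10.1:8889) follow the Tello SDK convention, and the helper names are my own, for illustration:

```python
import socket

# valid flip directions for the "flip x" command: left, right, forward, back
VALID_DIRECTIONS = {'l', 'r', 'f', 'b'}

def flip_command(direction):
    """Build the 'flip x' command string, validating the direction first."""
    if direction not in VALID_DIRECTIONS:
        raise ValueError("invalid flip direction: %r" % direction)
    return "flip " + direction

def send_command(sock, command, address=('192.168.10.1', 8889)):
    """Send a command string to the drone over UDP."""
    sock.sendto(command.encode('utf-8'), address)
```

Usage would look like `send_command(sock, flip_command('l'))` with a UDP socket created via `socket.socket(socket.AF_INET, socket.SOCK_DGRAM)`. Validating the direction before sending avoids shipping a malformed command to the drone mid-flight.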
Here is the code:
As I promised last time, in the next posts I’ll analyze in more detail how this works, plus a couple of improvements that I can implement.
The drone is flying very happily, but if the camera detects a banana, the drone must land!
Let’s take a look at the program working:
And a couple of notes regarding the app:
I still use Haar Cascades for object detection. I found an article with an XML file to detect bananas, so I’m working with that one (see references).
Using Haar Cascades is not the best technique for object detection. During the testing process, I found a lot of false positives, mostly small portions of the frame that were detected as bananas. One solution was to limit the size of the detected objects using OpenCV (I’ll write more about this in the future).
As you can see in the animation, when the drone is a few meters away, the video feed becomes messy. And because the object detection is performed locally, it takes some time to detect the banana.
I also implemented some code to make the drone take off when the user presses the ‘T’ key, and land when the user presses the ‘L’ key.
The code is starting to become a mess, so a refactoring is needed
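Two of the notes above can be sketched as small, drone-agnostic helpers (the function names and thresholds here are my own, for illustration): filtering out tiny detections that tend to be false positives, and mapping the ‘T’/‘L’ key presses to drone commands.

```python
def filter_detections(detections, min_w=40, min_h=40):
    """Keep only detections (x, y, w, h) that are at least min_w x min_h
    pixels, discarding the tiny regions that tend to be false positives."""
    return [(x, y, w, h) for (x, y, w, h) in detections
            if w >= min_w and h >= min_h]

def handle_key(key):
    """Map a pressed key code to a drone command: 'T' takes off, 'L' lands."""
    if key in (ord('t'), ord('T')):
        return 'takeoff'
    if key in (ord('l'), ord('L')):
        return 'land'
    return None
```

In the main loop, the key code returned by OpenCV’s `cv2.waitKey()` can be passed straight into `handle_key`, and the list returned by `detectMultiScale` into `filter_detections`. (OpenCV’s `detectMultiScale` also accepts `minSize`/`maxSize` parameters that achieve the same size filtering natively.)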
Here is the code:
In the next posts, I’ll analyze in more detail how this works, plus a couple of improvements that I can implement.
I’m writing a series of posts about how to control a drone with Python and 20 lines of code, and once I reached the point of reading the camera feed, I added a face detection sample. However, this time I didn’t use the face_recognition Python package that I’ve used in this series; I performed the face detection using OpenCV and Haar Cascades. So, let me explain a little about what this is.
Let me start by quoting an amazing article, “Face Detection using Haar Cascades” (see references):
Object Detection using Haar feature-based cascade classifiers is an effective object detection method proposed by Paul Viola and Michael Jones in their paper, “Rapid Object Detection using a Boosted Cascade of Simple Features” in 2001. It is a machine learning based approach where a cascade function is trained from a lot of positive and negative images. It is then used to detect objects in other images.
OpenCV comes with a trainer as well as detector. If you want to train your own classifier for any object like car, planes etc. you can use OpenCV to create one. Its full details are given here: Cascade Classifier Training.
And here we come to the cool part: OpenCV already contains many pre-trained classifiers for faces, eyes, smiles, etc. Those XML files are stored in the opencv/data/haarcascades/ folder (see references).
Let’s take a look at a really short (20-line) sample code for face detection:
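A sketch of that sample, laid out so the line numbers match the walkthrough (assuming the opencv-python package, which bundles the Haar cascade XML files under `cv2.data.haarcascades`, and the default webcam):

```python
import cv2

# open the default webcam
cap = cv2.VideoCapture(0)

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

while True:
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow('Face detection', frame)
    if cv2.waitKey(1) == 27:  # stop on ESC
        break
cap.release()
```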
Line 6, we use OpenCV to load the Haar cascade classifier to detect faces
Lines 9-20, main app
Lines 10-12, read a frame from the camera, transform the frame to a grayscale image, and use the face cascade detector to find faces
Lines 14-15, iterate through the detected faces and draw a rectangle around each one
Lines 17-20, display the webcam image with the detected faces, and stop the app when the ESC key is pressed
And a live sample using a drone camera instead of a USB camera:
Bonus. Viola Jones Face Detection and tracking explained video
This is a long video; however, it is an amazing entry point to understanding how the Viola-Jones algorithm works.
Back to some drone posts! I was kind of busy during the last few weeks, and now I can get back to writing about the drone.
OK, in the last posts I described how to connect to and work with the drone camera feed using OpenCV. Now, with 2 extra lines of code, we can also detect faces. Let’s take a look at the final sample.
In the previous image we can see 2 camera feeds: my computer webcam, where you can see how I hold the drone with the drone camera pointing at my face, and the drone camera feed, presented using OpenCV with a rectangle drawn over each detected face.
Let’s share some code insights:
As usual, I resize the camera feed to 320 x 240
The average processing rate is between 40 and 70 frames per second (FPS)
I use a Haar cascade classifier to detect the faces in each frame
Note: I need to write about Haar Cascades as part of my face detection post series.
In my next posts, I’ll add some drone specific behaviors for each face detected.
Now that I’ve started to understand how UDP works, I also did my research to find the best options to access a UDP video feed. Luckily for me, there are plenty of resources about doing this task using my old friend OpenCV.
Most of the OpenCV documentation is written in C++. However, in the end, it all boils down to these basic lines of code:
import cv2

# open the UDP video feed
videoUDP = 'udp://192.168.10.1:11111'
cap = cv2.VideoCapture(videoUDP)

# read a frame from the feed
ret, frame = cap.read()
img = cv2.resize(frame, (320, 240))

# display the frame in an OpenCV video window
cv2.imshow('Drone camera', img)
Note: in the references section below, I shared some of my posts about my experiences installing OpenCV on Windows 10.
Let’s go back to our sample Python app. Using the previous sample that displays the battery level, I changed the code to stay alive all the time, displaying the video feed in a 320 x 240 window.
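A minimal sketch of that always-alive loop, assuming the same UDP address as in the earlier snippet (the battery-level query is omitted here):

```python
import cv2

# open the drone's UDP video feed
cap = cv2.VideoCapture('udp://192.168.10.1:11111')

while True:
    ret, frame = cap.read()
    if not ret:
        continue  # skip dropped frames from the UDP feed
    # resize to a 320 x 240 window and display it
    img = cv2.resize(frame, (320, 240))
    cv2.imshow('Drone camera', img)
    if cv2.waitKey(1) == 27:  # ESC to quit
        break

cap.release()
cv2.destroyAllWindows()
```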
After some posts about how to setup a Raspberry Pi, today I’ll share the steps I follow to install OpenCV.
Disclaimer: if you are looking for a detailed step-by-step on how to install or even build OpenCV on a Raspberry Pi, I strongly recommend reading the post “Install OpenCV on Raspberry Pi 4” by Adrian Rosebrock.
Ok, so let’s start. I assume that you read my posts and your Raspbian image is up and running.
Install Python 3 and Update device
The first step will be to install Python 3 with the following command:
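On Raspbian, installing Python 3 and updating the device typically looks like this (standard apt commands; the exact set may vary with your image):

```shell
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install -y python3 python3-pip
```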
This process will take a few minutes, so it’s time for coffee number one!
Install OpenCV and switch to the right version for the Raspberry Pi!
And now, the magic command to install OpenCV:
sudo apt-get install libopencv-dev
And this process is the one that takes most of the time, so, coffee number two. Take a look at all the dependencies for this one:
And after a couple of minutes the process is done. We can test the OpenCV version by running 2 simple Python commands. First, let’s start Python with the command:
And then run the following lines:
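Assuming the interpreter is started with `python3`, the lines are simply:

```python
import cv2
print(cv2.__version__)
```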
This should display the current OpenCV version.
However, with the latest version, we get an error: ModuleNotFoundError: No module named ‘cv2’
The currently installed version has some issues running on the Raspberry Pi, so we need to downgrade to an earlier release. We first uninstall the installed version, and then install the specific older version.
When you start to research the amazing world of Computer Vision, you find that there are plenty of courses, tutorials, videos, and more resources. It’s all kind of “too much”, and it’s not easy to choose where to start.
That’s why, when you arrive at one of Adrian Rosebrock’s tutorials or articles, it ends up among your favorite bookmarks. He has amazingly detailed step-by-step tutorials, and I learned a lot about the Raspberry Pi and OpenCV from his website.
A couple of weeks ago, Adrian released an amazing resource for Computer Vision enthusiasts: