
Coding4Fun Drone 🚁 posts
- Introduction to DJI Tello
- Analyzing Python samples code from the official SDK
- Drone Hello World! Takeoff and land
- Tips to connect to Drone WiFi in Windows 10
- Reading data from the Drone, Get battery level
- Sample for real time data read, Get Accelerometer data
- How the drone camera video feed works, using FFMPEG to display the feed
- Open the drone camera video feed using OpenCV
- Performance and OpenCV, measuring FPS
- Detect faces using the drone camera
- Detect a banana and land!
- Flip when a face is detected!
- How to connect to Internet and to the drone at the same time
- Video with real time demo using the drone, Python and Visual Studio Code
- Using custom vision to analyze drone camera images
- Drawing frames for detected objects in real-time in the drone camera feed
- Save detected objects to local files, images and JSON results
- Save the Drone camera feed into a local video file
- Overlay images into the Drone camera feed using OpenCV
- Instance Segmentation from the Drone Camera using OpenCV, TensorFlow and PixelLib
- Create a 3×3 grid on the camera frame to detect objects and calculate positions in the grid
- Create an Azure IoT Central Device Template to work with drone information
- Create a Drone Device for Azure IoT Central
- Send drone information to Azure IoT Central
Hi!
Today's code objective is very simple:
The drone is flying along happily, but if the camera detects a banana, the drone must land!
Let's take a look at the program in action:

And a couple of notes regarding the app:
- I'm still using Haar Cascades for object detection. I found an article with an XML file to detect bananas, so I'm working with that one (see references).
- Haar Cascades are not the best technique for object detection. During the testing process, I found a lot of false positives, mostly small portions of the frame that were detected as bananas. One solution was to limit the size of the detected objects using OpenCV (I'll write more about this in the future; see the sketch after this list).
- As you can see in the animation, when the drone is a few meters away, the video feed becomes messy. And because the object detection is performed locally, it takes some time to detect the banana.
- I also implemented some code to take off when the user presses the 'T' key and land when the user presses the 'L' key.
- The code is starting to become a mess, so some refactoring is needed.
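
To give an idea of the size filter mentioned above, here is a minimal sketch, not the exact code from the app. It assumes the banana cascade XML is saved locally as banana_cascade.xml (a hypothetical file name); the minSize and maxSize parameters of detectMultiScale discard detections that are too small or too large to be a real banana in the frame.

```python
# Sketch: constrain Haar cascade detections by size to reduce false positives.
# Assumes the banana cascade XML is saved locally as "banana_cascade.xml".
import cv2

banana_cascade = cv2.CascadeClassifier("banana_cascade.xml")

def detect_bananas(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # minSize / maxSize drop detections outside a reasonable size range
    detections = banana_cascade.detectMultiScale(
        gray,
        scaleFactor=1.3,
        minNeighbors=5,
        minSize=(80, 80),
        maxSize=(400, 400))
    return detections
```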
Here is the code:
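
The following is a minimal sketch of the main loop, not the full app. It assumes the standard Tello SDK command interface (commands over UDP port 8889, video stream on UDP port 11111) and reuses the hypothetical detect_bananas() helper from the previous snippet.

```python
# Sketch: show the Tello camera feed, take off / land with 'T' / 'L',
# and land automatically when a banana is detected in the frame.
import socket
import cv2

TELLO_ADDRESS = ('192.168.10.1', 8889)   # default Tello command endpoint
VIDEO_URL = 'udp://0.0.0.0:11111'        # default Tello video stream

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 9000))

def send(command):
    sock.sendto(command.encode('utf-8'), TELLO_ADDRESS)

send('command')    # enter SDK mode
send('streamon')   # start the video stream

capture = cv2.VideoCapture(VIDEO_URL)

while True:
    ret, frame = capture.read()
    if not ret:
        continue

    bananas = detect_bananas(frame)
    for (x, y, w, h) in bananas:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 255), 2)
    if len(bananas) > 0:
        send('land')   # banana detected: land the drone

    cv2.imshow('Tello camera', frame)
    key = cv2.waitKey(1) & 0xFF
    if key in (ord('t'), ord('T')):
        send('takeoff')
    elif key in (ord('l'), ord('L')):
        send('land')
    elif key == ord('q'):
        break

capture.release()
cv2.destroyAllWindows()
sock.close()
```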
In the next posts, I'll analyze in more detail how this works, plus a couple of improvements that I can implement.
Happy coding!
Greetings
El Bruno
More posts on my blog ElBruno.com.
More info at https://beacons.ai/elbruno
References
My Posts