
Coding4Fun Drone 🚁 posts
- Introduction to DJI Tello
- Analyzing Python sample code from the official SDK
- Drone Hello World! Takeoff and land
- Tips to connect to Drone WiFi in Windows 10
- Reading data from the Drone, Get battery level
- Sample for real time data read, Get Accelerometer data
- How the drone camera video feed works, using FFMPEG to display the feed
- Open the drone camera video feed using OpenCV
- Performance and OpenCV, measuring FPS
- Detect faces using the drone camera
- Detect a banana and land!
- Flip when a face is detected!
- How to connect to Internet and to the drone at the same time
- Video with real time demo using the drone, Python and Visual Studio Code
- Using custom vision to analyze drone camera images
- Drawing frames for detected objects in real-time in the drone camera feed
- Save detected objects to local files, images and JSON results
- Save the Drone camera feed into a local video file
- Overlay images into the Drone camera feed using OpenCV
- Instance Segmentation from the Drone Camera using OpenCV, TensorFlow and PixelLib
- Create a 3×3 grid on the camera frame to detect objects and calculate positions in the grid
- Create an Azure IoT Central Device Template to work with drone information
- Create a Drone Device for Azure IoT Central
- Send drone information to Azure IoT Central
Hi!
Let’s use Custom Vision to analyze the images from our drone camera. In this scenario, I created a custom model to recognize MVP awards from my MVP wall. I know, that’s bragging, but I like it.

Disclaimer: there is plenty of documentation and many tutorials about Custom Vision, so I won't go deep into the steps of creating a model. See references.
For this scenario, I assume that:
- You have created a model in Custom Vision.
- You have published the Custom Vision model and have an HTTP endpoint.
- Or the model is exported as a Docker image and is running in a Docker container that also exposes an HTTP endpoint.
Either way, the client code only needs an HTTP endpoint to POST images to, as the sketch below shows.
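As a quick orientation, here is a minimal sketch of such a call, assuming a published object detection model. The endpoint URL and prediction key are placeholders you take from your own Custom Vision project; a local Docker container exposes a similar endpoint and needs no key.

```python
# Minimal sketch: POST a local image file to a Custom Vision prediction endpoint.
# The URL and key are placeholders; get the real values from the Prediction tab
# of your Custom Vision project (a local Docker container needs no key).
import json

import requests

ENDPOINT = "https://<region>.api.cognitive.microsoft.com/customvision/v3.0/Prediction/<project-id>/detect/iterations/<iteration-name>/image"
HEADERS = {
    "Prediction-Key": "<your-prediction-key>",
    "Content-Type": "application/octet-stream",
}

def analyze_image(file_path):
    """POST the raw image bytes and return the parsed JSON response."""
    with open(file_path, "rb") as f:
        response = requests.post(ENDPOINT, headers=HEADERS, data=f.read())
    return json.loads(response.text)

if __name__ == "__main__":
    print(json.dumps(analyze_image("frame.jpg"), indent=2))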
The code is similar to the one we used before: OpenCV to hook up the camera, plus the commands to take off and land. Let me highlight a couple of important sections in this code:
- There are a couple of new references, mostly used to process the JSON response from the Custom Vision model.
- Lines 146-155. Get the frame from the drone camera and save it as a local file. Apply a specific format to the file name, for demo purposes.
- Lines 157-163. Make an HTTP POST call to analyze the saved file, convert the result to a JSON object (room for improvement here), and analyze the JSON response.
- Lines 70-85. Analyze the JSON response from the Custom Vision model: sort the results by probability, filter them using a threshold (75%), and return a string with the detected objects.
- Lines 165-178. Calculate and display the FPS and the detected objects (all of these steps are put together in the sketch below).
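To make those steps concrete, here is a simplified sketch of the main loop. It assumes the Tello has already received the 'command' and 'streamon' commands, that analyze_image() is the helper from the previous sketch, and all other names are mine for illustration, not the ones in the full code below.

```python
# Simplified sketch of the main loop: grab a frame, save it, analyze it with
# Custom Vision, and overlay FPS plus detected objects. Assumes analyze_image()
# from the previous sketch and a Tello already streaming video.
import time

import cv2

capture = cv2.VideoCapture("udp://0.0.0.0:11111")  # listen for the Tello video stream
i = 0

while True:
    start_time = time.time()
    ret, frame = capture.read()
    if not ret:
        continue

    # Get the frame and save it as a local file, with a formatted
    # name for demo purposes (the "Lines 146-155" step)
    i += 1
    file_name = f"frame_{i:04d}.jpg"
    cv2.imwrite(file_name, frame)

    # HTTP POST to analyze the saved file (the "Lines 157-163" step);
    # requests' response.json() is the cleaner version of the
    # string-to-JSON conversion mentioned above
    json_response = analyze_image(file_name)

    # Sort by probability, filter with a 75% threshold, and build the
    # detected-objects string (the "Lines 70-85" step)
    predictions = sorted(json_response.get("predictions", []),
                         key=lambda p: p["probability"], reverse=True)
    detected = ", ".join(p["tagName"] for p in predictions
                         if p["probability"] >= 0.75)

    # Calculate and display FPS and detected objects (the "Lines 165-178" step)
    fps = 1.0 / max(time.time() - start_time, 1e-6)
    cv2.putText(frame, f"FPS: {fps:.1f}  {detected}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("Drone + Custom Vision", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```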
A sample JSON response starts like this:
{
  "created": "2020-04-08T17:22:02.179359",
  "id": "",
  "iteration": "",
  "predictions": [
    {
      "boundingBox": {
        "height": 0.1979116,
        "left": 0.3235259,
        "top": 0.05847502,
        "width": 0.20438321
      },
      "probability": 0.89171505,
      "tagId": 0,
      "tagName": "MVP"
    },
    {
      "boundingBox": {
        "height": 0.2091526,
        "left": 0.65271178,
        "top": 0.0433814,
        "width": 0.17669522
      },
      "probability": 0.70330358,
      "tagId": 0,
      "tagName": "MVP"
    },
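One thing to note: the boundingBox values are normalized, i.e., fractions of the frame width and height between 0 and 1. To turn a prediction into pixel coordinates, for example to draw it on the camera feed with OpenCV, a conversion like this sketch works (draw_prediction is my name for it, not part of the full code):

```python
import cv2

def draw_prediction(frame, prediction):
    # boundingBox values are fractions of the frame size, so scale
    # them by the frame dimensions to get pixel coordinates
    h, w = frame.shape[:2]
    box = prediction["boundingBox"]
    x1 = int(box["left"] * w)
    y1 = int(box["top"] * h)
    x2 = int((box["left"] + box["width"]) * w)
    y2 = int((box["top"] + box["height"]) * h)
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.putText(frame, prediction["tagName"], (x1, y1 - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
```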
So let’s go to the full code:
And if you want to see this up and running, it's much better in a video (start at ):
The complete source code can be found here: https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex
Happy coding!
Greetings
El Bruno
More posts in my blog ElBruno.com.
More info at https://beacons.ai/elbruno