#Coding4Fun – How to control your #drone with 20 lines of code! (18/N)


Hi!

Today I’ll step back a couple of posts and add 2 simple lines that allow me to save a video file from the drone camera. This was a request, and it makes a lot of sense to record a file from the drone camera.

The video will later contain detected objects and more, so let’s go with the code. All the magic happens here:

  • Lines 97-103. Open the drone camera stream, and also open a video output stream to save the video file.
  • Lines 123-124. Display the camera feed and add the camera frame to the output video file.
  • Lines 136-139. Dispose the objects and close the video output file. (A minimal sketch of these three steps follows this list.)
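
Here’s a minimal, hedged sketch of those three steps, assuming the default Tello stream URL used earlier in this series. The output file name, codec and frame size are illustrative assumptions, not the post’s exact values.

import cv2

video_url = 'udp://192.168.10.1:11111'   # Tello default video stream
frame_size = (640, 480)                  # assumed output resolution

# open the drone camera stream and a video output stream (lines 97-103)
cap = cv2.VideoCapture(video_url)
fourcc = cv2.VideoWriter_fourcc(*'XVID')  # assumed codec
video_out = cv2.VideoWriter('drone_video.avi', fourcc, 30.0, frame_size)

while True:
    ret, frame = cap.read()
    if not ret:
        continue
    img = cv2.resize(frame, frame_size)

    # display the camera feed and add the frame to the output file (lines 123-124)
    cv2.imshow('Drone Camera', img)
    video_out.write(img)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# dispose objects and close the video output file (lines 136-139)
cap.release()
video_out.release()
cv2.destroyAllWindows()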

Happy coding!

Greetings

El Bruno

#Coding4Fun – How to control your #drone with 20 lines of code! (17/N)


Hi!

Once we have a trained Custom Vision model instance, we can use it to recognize objects from the drone camera feed. Read my previous posts for descriptions of these steps.

Another interesting scenario is to save local files for every detected object. In the following code, I’ll save 2 different files for every detected object:

  • A camera frame image, with a frame around the detected object
  • A plain text file with the JSON information

In the sample code below, the save process happens in lines 122-129. And, nothing fancy here: the files share the same name so they can be correlated. A minimal sketch of this step follows.
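
As a hedged sketch of that save step, assuming `img` is the current camera frame with the detection frame already drawn and `json_response` is the parsed Custom Vision result:

import json
import time
import cv2

# shared timestamp-based name so the two files correlate
base_name = time.strftime('%Y%m%d_%H%M%S')

# camera frame image, with a frame around the detected object
cv2.imwrite(f'{base_name}.jpg', img)

# plain text file with the JSON information
with open(f'{base_name}.json', 'w') as json_file:
    json.dump(json_response, json_file, indent=2)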

drone recognized files

So let’s go to the full code:

And if you want to see this up and running, it’s much better to watch it in a video (start at ):

The complete source code can be found here: https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Happy coding!

Greetings

El Bruno


#Coding4Fun – How to control your #drone with 20 lines of code! (16/N)


Hi!

In my previous post, I shared an example where I analyzed the camera feed using an image recognition model created with Custom Vision. Today I’ll expand the sample and show, in real time, the detected MVP logos with a frame drawn in the drone camera feed.

Let’s take a look at the demo working in the following image.

drone camera image analysis using custom vision and drawing frames for detected objects

At the top of the image, we can see the app console log with the information received for each analyzed frame. When an object is detected, we can see the tag, the probability and the bounding box coordinates.

A sample JSON response starts like this:

{
  "created": "2020-04-08T17:22:02.179359",
  "id": "",
  "iteration": "",
  "predictions": [
    {
      "boundingBox": {
        "height": 0.1979116,
        "left": 0.3235259,
        "top": 0.05847502,
        "width": 0.20438321
      },
      "probability": 0.89171505,
      "tagId": 0,
      "tagName": "MVP"
    },
    {
      "boundingBox": {
        "height": 0.2091526,
        "left": 0.65271178,
        "top": 0.0433814,
        "width": 0.17669522
      },
      "probability": 0.70330358,
      "tagId": 0,
      "tagName": "MVP"
    },

In order to position the frames in the correct location, I need to do some math using the current camera and image sizes, and the returned bounding box values for height, left, top and width (lines 87-110).

resize_factor = 100

# scale the normalized bounding box values (0-1) to percentages
height = int(bb['height'] * resize_factor)
left = int(bb['left'] * resize_factor)
top = int(bb['top'] * resize_factor)
width = int(bb['width'] * resize_factor)

# adjust to the camera frame size (camera_Width x camera_Height)
height = int(height * camera_Height / 100)
left = int(left * camera_Width / 100)
top = int(top * camera_Height / 100)
width = int(width * camera_Width / 100)

# draw the bounding box; OpenCV points are (x, y) = (left, top)
start_point = (left, top)
end_point = (left + width, top + height)
color = (255, 0, 0)  # blue in BGR
thickness = 2
cv2.rectangle(img, start_point, end_point, color, thickness)
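
Putting it together, here’s a hedged sketch that loops over all the predictions and draws each box. It collapses the two-step resize_factor math above into a single multiplication, which is numerically equivalent up to rounding; `analysis`, `img`, `camera_Width` and `camera_Height` are assumed to come from the main camera loop, and the 0.75 threshold is my own assumption.

threshold = 0.75  # assumed minimum probability

for prediction in analysis['predictions']:
    if prediction['probability'] < threshold:
        continue
    bb = prediction['boundingBox']

    # bounding box values are normalized (0-1); scale to pixel coordinates
    left = int(bb['left'] * camera_Width)
    top = int(bb['top'] * camera_Height)
    width = int(bb['width'] * camera_Width)
    height = int(bb['height'] * camera_Height)

    cv2.rectangle(img, (left, top), (left + width, top + height), (255, 0, 0), 2)
    cv2.putText(img, f"{prediction['tagName']} {prediction['probability']:.2f}",
                (left, top - 5), cv2.FONT_HERSHEY_DUPLEX, 0.4, (255, 0, 0), 1)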

So let’s go to the full code:

And if you want to see this up and running, it’s much better to watch it in a video (start at ):

The complete source code can be found here: https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Happy coding!

Greetings

El Bruno


#Coding4Fun – How to control your #drone with 20 lines of code! (15/N)


Hi!

Let’s use Custom Vision to analyze the images from our drone camera. In this scenario, I created a custom model to recognize MVP awards from my MVP wall. I know, that’s bragging, but I like it.

Disclaimer: there is plenty of documentation and there are plenty of tutorials about Custom Vision, so I won’t go deep into the steps to create a model. See the references.

For my next scenario, I’ll assume that:

  • You have created a model in Custom Vision.
  • You have published the Custom Vision model and have an HTTP endpoint.
  • Or the model is exported as a docker image and running in a docker container, also exposing an HTTP endpoint.

The code is similar to the one we used before: OpenCV to hook up the camera, commands to take off and land. Let me highlight a couple of important lines in this code:

  • There are a couple of new references, mostly used to process the JSON response from the Custom Vision model.
  • Lines 146-155. Get the frame from the drone camera and save it as a local file. Apply a specific format to the file name, for demo purposes.
  • Lines 157-163. Make an HTTP POST call to analyze the saved file. Convert the result to a JSON object (room for improvement here), and analyze the JSON response. A hedged sketch of this step follows the list.
  • Lines 70-85. Analyze the JSON response from the Custom Vision model. Sort the results by probability and filter them using a threshold (75). Return a string with the detected objects.
  • Lines 165-178. Calculate and display the FPS and the detected objects.
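
Here’s a minimal, hedged sketch of the analyze step (lines 157-163 and 70-85). The endpoint URL and prediction key are placeholders; the `Prediction-Key` header applies to a published Custom Vision endpoint, while a model exported as a docker container exposes a similar endpoint without it.

import requests

probability_threshold = 75  # same threshold used in the post, in percent

def analyze_frame(file_name, endpoint, prediction_key):
    # HTTP POST call to analyze the saved file
    with open(file_name, 'rb') as image_file:
        headers = {'Prediction-Key': prediction_key,
                   'Content-Type': 'application/octet-stream'}
        response = requests.post(endpoint, headers=headers,
                                 data=image_file.read())
    analysis = response.json()

    # sort by probability (descending) and keep results above the threshold
    predictions = sorted(analysis['predictions'],
                         key=lambda p: p['probability'], reverse=True)
    detected = [p['tagName'] for p in predictions
                if p['probability'] * 100 >= probability_threshold]

    # return a string with the detected objects
    return ', '.join(detected)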

A sample JSON response starts like this:

{
  "created": "2020-04-08T17:22:02.179359",
  "id": "",
  "iteration": "",
  "predictions": [
    {
      "boundingBox": {
        "height": 0.1979116,
        "left": 0.3235259,
        "top": 0.05847502,
        "width": 0.20438321
      },
      "probability": 0.89171505,
      "tagId": 0,
      "tagName": "MVP"
    },
    {
      "boundingBox": {
        "height": 0.2091526,
        "left": 0.65271178,
        "top": 0.0433814,
        "width": 0.17669522
      },
      "probability": 0.70330358,
      "tagId": 0,
      "tagName": "MVP"
    },

So let’s go to the full code:

And if you want to see this up and running, it’s much better to watch it in a video (start at ):

The complete source code can be found here: https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Happy coding!

Greetings

El Bruno


#Coding4Fun – How to control your #drone with 20 lines of code! (13/N)


Hi!

No code today, just some tips about a specific network configuration to work with the drone.

In 2 days I’ll deliver a session about how to do some AI with a drone, and I just realized that I’ll be doing this session live. Going back to post 4 in this series, I realized that

My PC is connected to the drone via Wifi.

That means I can’t connect to the Internet to host the session online.

In normal times, I’d buy a couple of adapters and have everything in wireless mode. However, these days a simple Amazon order may take a couple of weeks.

So, I analyzed my home network and realized that I could reuse some of my Linksys Velop nodes in my office. At home, I have 3 nodes to cover the whole house, and I decided to move one of them to my office and use one of its Ethernet ports to connect a network cable to my PC. So, my final configuration is something like this (sorry, no Visio installed):

Thanks to this port, I can be connected to the Internet to host the session, and also be connected to the drone and have some fun!

More information about the event here

GlobalAI Community Virtual Tour, April 8th. Let’s code a drone to follow faces! Using AI, Python, containers and more

See you in a couple of days!

Happy coding!

Greetings

El Bruno


#Coding4Fun – How to control your #drone with 20 lines of code! (12/N)


Hi!

Today’s code objective is very simple, based on a request I received from the internet:

The drone is flying very happy, but if the camera detects a face, the drone will flip out !

Let’s take a look at the program working:

This one is very similar to the previous one. I also realized that I may need a better camera to record the live action side by side with the drone footage, but I think you get the idea. The command to make the drone flip is “flip x”, where “x” is the direction. For example:

"flip l" # flip left
"flip r" # flip right
"flip f" # flip forward
"flip b" # flip back

Here is the code:

As I promised last time, in the next posts I’ll analyze in more detail how this works, and a couple of improvements that I can implement.

Happy coding!

Greetings

El Bruno


#Coding4Fun – How to control your #drone with 20 lines of code! (11/N)


Hi!

Today’s code objective is very simple:

The drone is flying very happy, but if the camera detects a banana, the drone must land !

Let’s take a look at the program working:

drone flying, then landing when a banana is detected

And a couple of notes regarding the app:

  • I still use Haar Cascades for object detection. I found an article with an XML file to detect bananas, so I’m working with that one (see references).
  • Using Haar Cascades is not the best technique for object detection. During testing, I found a lot of false positives, mostly small portions of the frame detected as bananas. One solution was to limit the size of the detected objects using OpenCV; see the sketch after this list (I’ll write more about this in the future).
  • As you can see in the animation, when the drone is a few meters away the video feed becomes messy. And because the object detection is performed locally, it takes some time to detect the banana.
  • I also implemented some code to take off when the user presses the ‘T’ key, and land when the user presses the ‘L’ key.
  • The code is starting to become a mess, so a refactoring is needed.
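
Here’s a hedged sketch of the size-limiting idea, assuming `frame` comes from the main camera loop and `send_command` is the UDP helper from earlier posts. The cascade file name is a placeholder for the XML from the referenced article.

import cv2

banana_cascade = cv2.CascadeClassifier('banana_classifier.xml')  # placeholder name

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
bananas = banana_cascade.detectMultiScale(
    gray,
    scaleFactor=1.3,
    minNeighbors=5,
    minSize=(60, 60))  # ignore tiny detections, the main source of false positives

if len(bananas) > 0:
    send_command('land')  # land the drone when a banana is detected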

Here is the code:

In the next posts, I’ll analyze in more detail how this works, and a couple of improvements that I can implement.

Happy coding!

Greetings

El Bruno


#Coding4Fun – How to control your #drone with 20 lines of code! (10/N)


Hi!

Back to some drone posts! I was kind of busy during the last few weeks, and now I can get back to writing about the drone.

OK, in the last posts I described how to connect to and work with the drone camera feed using OpenCV. Now, with 2 extra lines of code, we can also detect faces. Let’s take a look at the final sample.

drone camera and camera view performing face detection

In the previous image we can see 2 camera feeds: my computer webcam, where you can see how I hold the drone with its camera pointing at my face; and the drone camera feed, presented using OpenCV and drawing a frame over each detected face.

Let’s share some code insights:

  • As usual, I resize the camera feed to 320 × 240.
  • The average processing speed is between 40 and 70 FPS.
  • I use a Haar cascade classifier to detect the faces in each frame; a minimal sketch follows the note below.

Note: I need to write about Haar Cascades as part of my face detection post series.
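
Here’s a minimal sketch of that face detection step, assuming `img` is the 320 × 240 frame from the main loop; the frontal face cascade ships with OpenCV.

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)

# draw a frame over each detected face
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)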

In my next posts, I’ll add some drone-specific behaviors for each detected face.

Happy coding!

Greetings

El Bruno


#Coding4Fun – How to control your #drone with 20 lines of code! (9/N)


Hi!

Let’s take some frames-per-second (FPS) measurements of the UDP and OpenCV connection. It seems that with simple movements, the values move between 30 and 60 FPS.

showing FPS information with the drone camera

I just added a couple of lines in the main while loop to calculate the FPS.

# main loop: read a frame, compute the FPS and overlay it on the image
i = 0
while True:
    i = i + 1
    start_time = time.time()

    battery = sendReadCommand('battery?')
    print(f'battery: {battery} % - i: {i}')

    try:
        ret, frame = cap.read()
        img = cv2.resize(frame, (640, 480))

        if (time.time() - start_time) > 0:
            # FPS = 1 / time to process this loop iteration
            fpsInfo = f'FPS: {1.0 / (time.time() - start_time):.2f}'
            font = cv2.FONT_HERSHEY_DUPLEX
            cv2.putText(img, fpsInfo, (10, 20), font, 0.4, (255, 255, 255), 1)

        cv2.imshow('@elbruno - DJI Tello Camera', img)
    except Exception as e:
        print(f'exc: {e}')

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

As a final note, I should mention that I ran some tests using different camera resolutions, and the FPS averages were similar. I tested with 640×480 pixels and 1024×768 pixels.

In the next posts, let’s do some face detection and rock some AI with the drone!

Happy coding!

Greetings

El Bruno


#Coding4Fun – How to control your #drone with 20 lines of code! (8/N)


Hi!

Now that I’ve started to understand how UDP works, I also did some research to find the best options to access a UDP video feed. Lucky for me, there are plenty of resources about doing this with my old friend OpenCV.

Most of the OpenCV documentation is written in C++. However, in the end, it all comes down to these basic lines of code:

import cv2

# open the UDP video feed from the drone
videoUDP = 'udp://192.168.10.1:11111'
cap = cv2.VideoCapture(videoUDP)

# read a frame from the feed and resize it
ret, frame = cap.read()
img = cv2.resize(frame, (320, 240))

# display the frame in an OpenCV video window
cv2.imshow('Video', img)


Note: I’ve previously shared some posts with my experiences on how to install OpenCV on Windows 10.

Let’s go back to our sample Python app. Using the previous sample that displays the battery level, I changed the code to run continuously, displaying the video feed in a 320×240 window.

The following video shows how fast this works:

And of course, the complete code, with these notes:

  • Lines 96-100. Open the video feed and wait 2 seconds.
  • Lines 104-118. Main app loop (see the sketch after this list):
    • Get and display the battery level
    • Get a UDP video frame
    • Resize the frame to 320×240
    • Display the frame
    • When the Q key is pressed, exit the app
  • Line 121. Close the video stream.
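
Here’s a hedged sketch of that main app, reusing `sendReadCommand` from the earlier posts in this series as the helper that sends a Tello SDK query over UDP:

import time
import cv2

# open the video feed and wait 2 seconds for the stream to start (lines 96-100)
cap = cv2.VideoCapture('udp://192.168.10.1:11111')
time.sleep(2)

# main app loop (lines 104-118)
while True:
    # get and display the battery level
    battery = sendReadCommand('battery?')
    print(f'battery: {battery} %')

    # get a UDP video frame, resize it to 320x240 and display it
    ret, frame = cap.read()
    if ret:
        img = cv2.resize(frame, (320, 240))
        cv2.imshow('Video', img)

    # when the Q key is pressed, exit the app
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# close the video stream (line 121)
cap.release()
cv2.destroyAllWindows()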

Happy coding!

Greetings

El Bruno
