#Event – Let’s code a drone ✈ to follow faces 😀 with #AzureIoT (x2!) Supporting the #GlobalAI Bootcamp Singapore and Germany!

Hi !

Let’s start 2021 with some drone events supporting 2 Global AI Bootcamp events.

Please check each event location for specific details on time and streaming options.

Let’s code a drone to follow faces, syncing everything with Azure IoT

You can control a drone using 20 lines of code. That’s the easy part. However, adding extra features like face or object detection, and programming the drone to follow an object or a face, requires … another 20 lines of code! During this workshop we will review how to connect to a drone, how to send and receive commands, how to read the camera video feed, and how to apply AI on top of the camera feed to recognize objects or faces. We will use a simple home drone ($100) and Python. And, when we review some enterprise scenarios, we will use Azure IoT to sync the drone information in IoT mode. Let’s build this!
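As a teaser, here is a minimal sketch of that first “20 lines” part. This is not the exact workshop code; it just uses the Tello’s documented UDP text commands, and assumes your PC is connected to the drone’s Wi-Fi:

# minimal sketch: take off and land a DJI Tello using its UDP text protocol
# (assumes the PC is connected to the Tello Wi-Fi network)
import socket
import time

address = ('192.168.10.1', 8889)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 8889))

def send(cmd):
    sock.sendto(cmd.encode('utf-8'), address)
    response, _ = sock.recvfrom(1024)      # blocks until the drone answers
    print(f'{cmd}: {response.decode()}')

send('command')   # enter SDK mode
send('takeoff')
time.sleep(5)
send('land')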

For this session I’ll explain how to create an SDK from zero to control a DJI Tello drone, and the last part will include some examples on how to connect the drone’s reported information to Azure IoT.
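For the Azure IoT part, the idea looks roughly like this. This is a hedged sketch using the azure-iot-device SDK; the connection string and telemetry fields are placeholders, not the session’s final code:

# hedged sketch: send drone telemetry (battery, pitch) to an Azure IoT Hub
import json
from azure.iot.device import IoTHubDeviceClient, Message

# placeholder connection string, copied from your IoT Hub device details
conn_str = 'HostName=<your-hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>'
client = IoTHubDeviceClient.create_from_connection_string(conn_str)

telemetry = {'battery': 87, 'pitch': 0}   # values read from the drone state stream
client.send_message(Message(json.dumps(telemetry)))
client.shutdown()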

And this is probably the last one from my Microsoft MVP room-office (see below); there are some changes coming in the near future, so let’s rock these 2 sessions!

Happy coding!

Greetings

El Bruno


#Event – Resources used during the session “Hack a drone, hack the camera and use AI” at the Global AI Tour, Lahore Pakistan, 2020

Hi !

I had a great time early in the day with my Microsoft Student Partners from Lahore, Pakistan, for the Global AI Tour. As usual, time for slides and code:

Slides

Code

https://github.com/elbruno/events/tree/main/2020%2011%2030%20Global%20AI%20Tour%20Pakistan

Resources

Happy coding!

Greetings

El Bruno

#Event – Resources used during the session “Getting started with Machine Learning.Net and AutoML” at the MVP Days Israel 2020

Hi !

I had a great time early in the day (04:00AM!) with the MVP Days Israel team talking about how to start with Machine Learning .Net. As usual, time for slides and code:

Slides

Code

https://github.com/elbruno/events/tree/main/20201119%20MVP%20Day%20Israel%20MLNet

Resources

Recording

Coming soon !

Happy coding!

Greetings

El Bruno

#Event – Resources used during the session “Let’s code a drone 🛸 to follow faces with #dotnet” @dotnetdocsshow

Hi !

I had a great time yesterday with the @dotnetdocsshow team talking about how to program a drone. And this time, with a special flavor:

Let’s program the drone using dotnet core!

It was an amazing experience, and a really fun moment. Scott, David, Cam and Cecil are great hosts, and they make it very easy to move from topic to topic.

Slides

Code

https://github.com/elbruno/events/tree/main/2020%2011%2009%20DotNetDocs%20Dev%20Drone%20AI

Resources

Recording

And I’ll spend some time in the near future sharing some lessons learned working with OpenCV, sockets and DotNetCore!

Happy coding!

Greetings

El Bruno

#Event – Let’s hack a drone, hack the camera and use AI! virtual with Microsoft Reactor @MSFTReactor

Hi !

Today I’ll share a new version of my drone and Artificial Intelligence session, hosted by Microsoft Reactor Toronto.

The registration link and details are available here.

Happy coding!

Greetings

El Bruno

#Coding4Fun – How to control your #drone with 20 lines of code! (18/N)

Hi !

Today I’ll step back a couple of posts, and add 2 simple lines to save a video file from the drone camera. This was a request, and it makes a lot of sense to record the drone camera feed to a file.

The video will later contain detected objects and more, so let’s go with the code. All the magic happens here:

  • Lines 97-103. Open the drone camera stream, and also open a video output stream to save the video file.
  • Lines 123-124. Display the camera feed and add the camera frame into the output video file.
  • Lines 136-139. Dispose objects, and close the video output file.
# Bruno Capuano
# enable drone video camera
# display video camera using OpenCV
# display FPS
import socket
import time
import threading
import cv2

def receiveData():
    global response
    while True:
        try:
            response, _ = clientSocket.recvfrom(1024)
        except:
            break

def readStates():
    global battery
    while True:
        try:
            response_state, _ = stateSocket.recvfrom(256)
            if response_state != 'ok':
                response_state = response_state.decode('ASCII')
                list = response_state.replace(';', ':').split(':')
                battery = int(list[21])
        except:
            break

def sendCommand(command):
    global response
    timestamp = int(time.time() * 1000)
    clientSocket.sendto(command.encode('utf-8'), address)
    while response is None:
        if (time.time() * 1000) - timestamp > 5 * 1000:
            return False
    return response

def sendReadCommand(command):
    response = sendCommand(command)
    try:
        response = str(response)
    except:
        pass
    return response

def sendControlCommand(command):
    response = None
    for i in range(0, 5):
        response = sendCommand(command)
        if response == 'OK' or response == 'ok':
            return True
    return False

# -----------------------------------------------
# Main program
# -----------------------------------------------

# connection info
UDP_IP = '192.168.10.1'
UDP_PORT = 8889
last_received_command = time.time()
STATE_UDP_PORT = 8890
address = (UDP_IP, UDP_PORT)
response = None
response_state = None
clientSocket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
clientSocket.bind(('', UDP_PORT))
stateSocket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
stateSocket.bind(('', STATE_UDP_PORT))

# start threads
recThread = threading.Thread(target=receiveData)
recThread.daemon = True
recThread.start()
stateThread = threading.Thread(target=readStates)
stateThread.daemon = True
stateThread.start()

# connect to drone
response = sendControlCommand("command")
print(f'command response: {response}')
response = sendControlCommand("streamon")
print(f'streamon response: {response}')

# drone information
battery = 0

# open UDP
print(f'opening UDP video feed, wait 2 seconds ')
videoUDP = 'udp://192.168.10.1:11111'
cap = cv2.VideoCapture(videoUDP)
time.sleep(2)

# open a video output stream to save the video file
vid_cod = cv2.VideoWriter_fourcc(*'XVID')
vid_output = cv2.VideoWriter("videos/cam_video.mp4", vid_cod, 20.0, (640, 480))

# open
i = 0
while True:
    i = i + 1
    start_time = time.time()
    sendReadCommand('battery?')
    print(f'battery: {battery} % – i: {i}')
    try:
        ret, frame = cap.read()
        img = cv2.resize(frame, (640, 480))
        if (time.time() - start_time) > 0:
            fpsInfo = "FPS: " + str(1.0 / (time.time() - start_time))  # FPS = 1 / time to process loop
            font = cv2.FONT_HERSHEY_DUPLEX
            cv2.putText(img, fpsInfo, (10, 20), font, 0.4, (255, 255, 255), 1)
        # display the camera feed and add the frame to the output video file
        cv2.imshow('@elbruno – DJI Tello Camera', img)
        vid_output.write(img)
    except Exception as e:
        print(f'exc: {e}')
        pass
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

response = sendControlCommand("streamoff")
print(f'streamoff response: {response}')

# close the already opened camera, and the video file
cap.release()
vid_output.release()
cv2.destroyAllWindows()
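A quick note on the video output: the sample writes XVID-encoded frames into a .mp4 container. Depending on your OpenCV build, you may need the 'mp4v' FourCC for .mp4 files (or keep XVID and save to .avi), and the videos folder must already exist, otherwise VideoWriter silently creates nothing.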

Happy coding!

Greetings

El Bruno

#Coding4Fun – How to control your #drone with 20 lines of code! (17/N)

Hi !

Once we have a trained Custom Vision model instance, we can use it to recognize objects from the drone camera feed. Read my previous posts for descriptions of these steps.

Another interesting scenario is to save local files for every detected object. In the following code, I’ll save 2 different files for every detected object:

  • A camera frame image, with a frame around the detected object
  • A plain text file with the JSON information

In the sample code below, the save process is in lines 122-129. And, in a simple (not fancy) way, both files share the same name so they can be correlated, as shown in the sketch below.

drone recognized files
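The naming trick is just a pair of string replacements; a quick sketch with a hypothetical frame number:

# hypothetical frame number, matching the tmp\imageNNNNN.png pattern used below
frameImageFileName = 'tmp\\image00042.png'
detImageFileName = frameImageFileName.replace('tmp', 'det')    # det\image00042.png
detJsonFileName = detImageFileName.replace('png', 'json')      # det\image00042.json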

So let’s go to the full code:

# Bruno Capuano
# open camera with openCV
# analyze camera frame with local docker custom vision project
# draw bounding boxes for each recognized object
# save a frame image and a json file for each detected object
import socket
import time
import threading
import cv2
import urllib
import json
import requests
import os
from flask import Flask, request, jsonify

def receiveData():
    global response
    while True:
        try:
            response, _ = clientSocket.recvfrom(1024)
        except:
            break

def readStates():
    global battery, pitch
    while True:
        try:
            response_state, _ = stateSocket.recvfrom(256)
            if response_state != 'ok':
                response_state = response_state.decode('ASCII')
                list = response_state.replace(';', ':').split(':')
                battery = int(list[21])
                pitch = int(list[1])
        except:
            break

def sendCommand(command):
    global response
    timestamp = int(time.time() * 1000)
    clientSocket.sendto(command.encode('utf-8'), address)
    while response is None:
        if (time.time() * 1000) - timestamp > 5 * 1000:
            return False
    return response

def sendReadCommand(command):
    response = sendCommand(command)
    try:
        response = str(response)
    except:
        pass
    return response

def sendControlCommand(command):
    response = None
    for i in range(0, 5):
        response = sendCommand(command)
        if response == 'OK' or response == 'ok':
            return True
    return False

# -----------------------------------------------
# Local calls
# -----------------------------------------------
probabilityThreshold = 75

def displayPredictions(jsonPrediction, frame, frameImageFileName):
    global camera_Width, camera_Heigth
    jsonObj = json.loads(jsonPrediction)
    preds = jsonObj['predictions']
    sorted_preds = sorted(preds, key=lambda x: x['probability'], reverse=True)
    strSortedPreds = ""
    resultFound = False
    if (sorted_preds):
        # open img to save results
        img = cv2.imread(frameImageFileName)
        detected = False
        for pred in sorted_preds:
            # tag name and prob * 100
            tagName = str(pred['tagName'])
            probability = pred['probability'] * 100
            # apply threshold
            if (probability >= probabilityThreshold):
                detected = True
                bb = pred['boundingBox']
                resize_factor = 100
                height = int(bb['height'] * resize_factor)
                left = int(bb['left'] * resize_factor)
                top = int(bb['top'] * resize_factor)
                width = int(bb['width'] * resize_factor)
                print(f'height = {height} – left {left} – top {top} – width {width}')
                # adjust to the current camera size (camera_Width x camera_Heigth)
                height = int(height * camera_Heigth / 100)
                left = int(left * camera_Width / 100)
                top = int(top * camera_Heigth / 100)
                width = int(width * camera_Width / 100)
                print(f'Adjusted height = {height} – left {left} – top {top} – width {width}')
                # draw bounding boxes
                start_point = (top, left)
                end_point = (top + height, left + width)
                print(f'MVP – {probability}')
                print(f'start point: {start_point} – end point: {end_point}')
                color = (255, 0, 0)
                thickness = 2
                cv2.rectangle(img, start_point, end_point, color, thickness)
                print(jsonPrediction)
        # save the detected image and the json results, sharing the same file name
        if (detected == True):
            detImageFileName = frameImageFileName.replace('tmp', 'det')
            cv2.imwrite(detImageFileName, img)
            detJsonFileName = detImageFileName.replace('png', 'json')
            save_text = open(detJsonFileName, 'w')
            save_text.write(jsonPrediction)
            save_text.close()
    return strSortedPreds

# instantiate flask app and push a context
app = Flask(__name__)

# -----------------------------------------------
# Main program
# -----------------------------------------------

# connection info
UDP_IP = '192.168.10.1'
UDP_PORT = 8889
last_received_command = time.time()
STATE_UDP_PORT = 8890
address = (UDP_IP, UDP_PORT)
response = None
response_state = None
clientSocket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
clientSocket.bind(('', UDP_PORT))
stateSocket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
stateSocket.bind(('', STATE_UDP_PORT))

# start threads
recThread = threading.Thread(target=receiveData)
recThread.daemon = True
recThread.start()
stateThread = threading.Thread(target=readStates)
stateThread.daemon = True
stateThread.start()

# connect to drone
response = sendControlCommand("command")
print(f'command response: {response}')
response = sendControlCommand("streamon")
print(f'streamon response: {response}')

# drone information
battery = 0
pitch = 0

# open UDP
print(f'opening UDP video feed, wait 2 seconds ')
videoUDP = 'udp://192.168.10.1:11111'
cap = cv2.VideoCapture(videoUDP)
time.sleep(2)
camera_Width = 640
camera_Heigth = 480

# open
i = 0
while True:
    i = i + 1
    imgNumber = str(i).zfill(5)
    start_time = time.time()
    sendReadCommand('battery?')
    print(f'battery: {battery} % – pitch: {pitch} – i: {imgNumber}')
    try:
        ret, frame = cap.read()
        img = cv2.resize(frame, (camera_Width, camera_Heigth))

        # save image to disk and open it
        frameImageFileName = str(f'tmp\\image{imgNumber}.png')
        cv2.imwrite(frameImageFileName, img)
        with open(frameImageFileName, 'rb') as f:
            img_data = f.read()

        # analyze file in local container
        api_url = "http://127.0.0.1:8070/image"
        r = requests.post(api_url, data=img_data)
        with app.app_context():
            jsonResults = jsonify(r.json())
            jsonStr = jsonResults.get_data(as_text=True)
        displayPredictions(jsonStr, frame, frameImageFileName)

        fpsInfo = ""
        if (time.time() - start_time) > 0:
            fpsInfo = "FPS: " + str(1.0 / (time.time() - start_time))  # FPS = 1 / time to process loop
            font = cv2.FONT_HERSHEY_DUPLEX
            cv2.putText(img, fpsInfo, (10, 20), font, 0.4, (255, 255, 255), 1)
        cv2.imshow('@elbruno – DJI Tello Camera', img)
    except Exception as e:
        print(f'exc: {e}')
        pass
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

response = sendControlCommand("streamoff")
print(f'streamoff response: {response}')

And if you want to see this up and running, it’s much better to watch it in a video (start at ):

The complete source code can be found here https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Happy coding!

Greetings

El Bruno

References

#Coding4Fun – How to control your #drone with 20 lines of code! (16/N)

Hi !

In my previous post, I shared an example where I analyzed the camera feed using an image recognition model created with Custom Vision. Today I’ll expand the sample and show, in real time, the detected MVP logos with a frame in the drone camera feed.

Let’s take a look at the demo working in the following image.

drone camera image analysis using custom vision and drawing frames for detected objects

At the top of the image, we can see the app console log, with the information received for each analyzed frame. When an object is detected, we can see the tag, the probability and the bounding box coordinates.

A sample JSON return string starts like this one:

{
  "created": "2020-04-08T17:22:02.179359",
  "id": "",
  "iteration": "",
  "predictions": [
    {
      "boundingBox": {
        "height": 0.1979116,
        "left": 0.3235259,
        "top": 0.05847502,
        "width": 0.20438321
      },
      "probability": 0.89171505,
      "tagId": 0,
      "tagName": "MVP"
    },
    {
      "boundingBox": {
        "height": 0.2091526,
        "left": 0.65271178,
        "top": 0.0433814,
        "width": 0.17669522
      },
      "probability": 0.70330358,
      "tagId": 0,
      "tagName": "MVP"
    },

In order to position the frames in the correct location, I need to do some math using the current camera image size and the returned bounding box values for height, left, top and width (lines 87-110):

resize_factor = 100

height = int(bb['height'] * resize_factor)
left = int(bb['left'] * resize_factor)
top = int(bb['top'] * resize_factor)
width = int(bb['width'] * resize_factor)

# adjust to the current camera size (camera_Width x camera_Heigth)
height = int(height * camera_Heigth / 100)
left = int(left * camera_Width / 100)
top = int(top * camera_Heigth / 100)
width = int(width * camera_Width / 100)

# draw bounding boxes
start_point = (top, left)
end_point = (top + height, left + width)
color = (255, 0, 0)
thickness = 2
cv2.rectangle(img, start_point, end_point, color, thickness)
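To make the two-step scaling concrete, here is a worked example (my own arithmetic, using the first prediction from the JSON sample above and a 640x480 frame; note that the intermediate int() truncation makes the result slightly coarser than multiplying directly):

# worked example: first bounding box from the JSON sample, 640x480 frame
bb = {'height': 0.1979116, 'left': 0.3235259, 'top': 0.05847502, 'width': 0.20438321}
height = int(int(bb['height'] * 100) * 480 / 100)   # int(19 * 4.8) -> 91 px
left   = int(int(bb['left']   * 100) * 640 / 100)   # int(32 * 6.4) -> 204 px
top    = int(int(bb['top']    * 100) * 480 / 100)   # int(5 * 4.8)  -> 24 px
width  = int(int(bb['width']  * 100) * 640 / 100)   # int(20 * 6.4) -> 128 px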

So let’s go to the full code:

# Bruno Capuano
# open camera with openCV
# analyze camera frame with local docker custom vision project
# draw bounding boxes for each recognized object
import socket
import time
import threading
import cv2
import urllib
import json
import requests
import os
from flask import Flask, request, jsonify

def receiveData():
    global response
    while True:
        try:
            response, _ = clientSocket.recvfrom(1024)
        except:
            break

def readStates():
    global battery, pitch
    while True:
        try:
            response_state, _ = stateSocket.recvfrom(256)
            if response_state != 'ok':
                response_state = response_state.decode('ASCII')
                list = response_state.replace(';', ':').split(':')
                battery = int(list[21])
                pitch = int(list[1])
        except:
            break

def sendCommand(command):
    global response
    timestamp = int(time.time() * 1000)
    clientSocket.sendto(command.encode('utf-8'), address)
    while response is None:
        if (time.time() * 1000) - timestamp > 5 * 1000:
            return False
    return response

def sendReadCommand(command):
    response = sendCommand(command)
    try:
        response = str(response)
    except:
        pass
    return response

def sendControlCommand(command):
    response = None
    for i in range(0, 5):
        response = sendCommand(command)
        if response == 'OK' or response == 'ok':
            return True
    return False

# -----------------------------------------------
# Local calls
# -----------------------------------------------
probabilityThreshold = 75

def displayPredictions(jsonPrediction, frame):
    # draws the bounding boxes on the global img (the resized frame from the main loop)
    global camera_Width, camera_Heigth
    jsonObj = json.loads(jsonPrediction)
    preds = jsonObj['predictions']
    sorted_preds = sorted(preds, key=lambda x: x['probability'], reverse=True)
    strSortedPreds = ""
    resultFound = False
    if (sorted_preds):
        for pred in sorted_preds:
            # tag name and prob * 100
            tagName = str(pred['tagName'])
            probability = pred['probability'] * 100
            # apply threshold
            if (probability >= probabilityThreshold):
                bb = pred['boundingBox']
                resize_factor = 100
                height = int(bb['height'] * resize_factor)
                left = int(bb['left'] * resize_factor)
                top = int(bb['top'] * resize_factor)
                width = int(bb['width'] * resize_factor)
                #print(f'height = {height} – left {left} – top {top} – width {width}')
                # adjust to the current camera size (camera_Width x camera_Heigth)
                height = int(height * camera_Heigth / 100)
                left = int(left * camera_Width / 100)
                top = int(top * camera_Heigth / 100)
                width = int(width * camera_Width / 100)
                #print(f'Adjusted height = {height} – left {left} – top {top} – width {width}')
                # draw bounding boxes
                start_point = (top, left)
                end_point = (top + height, left + width)
                color = (255, 0, 0)
                thickness = 2
                cv2.rectangle(img, start_point, end_point, color, thickness)
                print(f'MVP – {probability}')
                print(f'start point: {start_point} – end point: {end_point}')
                print(jsonPrediction)
    return strSortedPreds

# instantiate flask app and push a context
app = Flask(__name__)

# -----------------------------------------------
# Main program
# -----------------------------------------------

# connection info
UDP_IP = '192.168.10.1'
UDP_PORT = 8889
last_received_command = time.time()
STATE_UDP_PORT = 8890
address = (UDP_IP, UDP_PORT)
response = None
response_state = None
clientSocket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
clientSocket.bind(('', UDP_PORT))
stateSocket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
stateSocket.bind(('', STATE_UDP_PORT))

# start threads
recThread = threading.Thread(target=receiveData)
recThread.daemon = True
recThread.start()
stateThread = threading.Thread(target=readStates)
stateThread.daemon = True
stateThread.start()

# connect to drone
response = sendControlCommand("command")
print(f'command response: {response}')
response = sendControlCommand("streamon")
print(f'streamon response: {response}')

# drone information
battery = 0
pitch = 0

# open UDP
print(f'opening UDP video feed, wait 2 seconds ')
videoUDP = 'udp://192.168.10.1:11111'
cap = cv2.VideoCapture(videoUDP)
time.sleep(2)
camera_Width = 640
camera_Heigth = 480

# open
i = 0
while True:
    i = i + 1
    imgNumber = str(i).zfill(5)
    start_time = time.time()
    sendReadCommand('battery?')
    print(f'battery: {battery} % – pitch: {pitch} – i: {imgNumber}')
    try:
        ret, frame = cap.read()
        img = cv2.resize(frame, (camera_Width, camera_Heigth))

        # save image to disk and open it
        frameImageFileName = str(f'tmp\\image{imgNumber}.png')
        cv2.imwrite(frameImageFileName, img)
        with open(frameImageFileName, 'rb') as f:
            img_data = f.read()

        # analyze file in local container
        api_url = "http://127.0.0.1:8070/image"
        r = requests.post(api_url, data=img_data)
        with app.app_context():
            jsonResults = jsonify(r.json())
            jsonStr = jsonResults.get_data(as_text=True)
        displayPredictions(jsonStr, frame)

        fpsInfo = ""
        if (time.time() - start_time) > 0:
            fpsInfo = "FPS: " + str(1.0 / (time.time() - start_time))  # FPS = 1 / time to process loop
            font = cv2.FONT_HERSHEY_DUPLEX
            cv2.putText(img, fpsInfo, (10, 20), font, 0.4, (255, 255, 255), 1)
        cv2.imshow('@elbruno – DJI Tello Camera', img)
    except Exception as e:
        print(f'exc: {e}')
        pass
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

response = sendControlCommand("streamoff")
print(f'streamoff response: {response}')

And if you want to see this up and running, it’s much better to watch it in a video (start at ):

The complete source code can be found here https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Happy coding!

Greetings

El Bruno

References

#Coding4Fun – How to control your #drone with 20 lines of code! (15/N)

Hi !

Let’s use Custom Vision to analyze the images from our drone camera. In this scenario, I created a custom model to recognize MVP awards from my MVP wall. I know, that’s bragging, but I like it.

Disclaimer: there is plenty of documentation and tutorials about Custom Vision, so I won’t go deep into the steps of creating a model. See references.

For my next scenario, I assume that:

  • You have created a model in Custom Vision
  • You have published the Custom Vision model, and have an HTTP endpoint
  • Or the model is exported as a docker image and running in a docker container, which also gives us an HTTP endpoint (see the sketch after this list)
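Here is that sketch: a minimal standalone test of the containerized model, assuming it listens on the same http://127.0.0.1:8070/image endpoint used in the full code below (the file name is a placeholder):

# minimal sketch: POST a saved camera frame to the local Custom Vision container
import requests

with open('image00001.png', 'rb') as f:      # placeholder file name
    img_data = f.read()
r = requests.post('http://127.0.0.1:8070/image', data=img_data)
print(r.json())   # predictions with tagName, probability and boundingBox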

The code is similar to the one we used before: OpenCV to hook up the camera, and commands to take off and land. Let me remark on a couple of important lines in this code:

  • There are a couple of new references, mostly used to process the JSON response from the Custom Vision model.
  • Lines 146-155. Get the frame from the drone camera and save it to a local file, applying a specific file name format for demo purposes.
  • Lines 157-163. Make an HTTP POST call to analyze the saved file, convert the result to a JSON object (room for improvement here), and analyze the JSON response.
  • Lines 70-85. Analyze the JSON response from the Custom Vision model: sort the results by probability, filter them using a threshold (75), and return a string with the detected objects.
  • Lines 165-178. Calculate and display FPS and detected objects.

A sample JSON return string starts like this one:

{
  "created": "2020-04-08T17:22:02.179359",
  "id": "",
  "iteration": "",
  "predictions": [
    {
      "boundingBox": {
        "height": 0.1979116,
        "left": 0.3235259,
        "top": 0.05847502,
        "width": 0.20438321
      },
      "probability": 0.89171505,
      "tagId": 0,
      "tagName": "MVP"
    },
    {
      "boundingBox": {
        "height": 0.2091526,
        "left": 0.65271178,
        "top": 0.0433814,
        "width": 0.17669522
      },
      "probability": 0.70330358,
      "tagId": 0,
      "tagName": "MVP"
    },

So let’s go to the full code:

# Bruno Capuano
# open camera with openCV
# analyze camera frame with local docker custom vision project
# display recognized objects in output log
import socket
import time
import threading
import cv2
import urllib
import json
import requests
import os
from flask import Flask, request, jsonify

def receiveData():
    global response
    while True:
        try:
            response, _ = clientSocket.recvfrom(1024)
        except:
            break

def readStates():
    global battery
    while True:
        try:
            response_state, _ = stateSocket.recvfrom(256)
            if response_state != 'ok':
                response_state = response_state.decode('ASCII')
                list = response_state.replace(';', ':').split(':')
                battery = int(list[21])
        except:
            break

def sendCommand(command):
    global response
    timestamp = int(time.time() * 1000)
    clientSocket.sendto(command.encode('utf-8'), address)
    while response is None:
        if (time.time() * 1000) - timestamp > 5 * 1000:
            return False
    return response

def sendReadCommand(command):
    response = sendCommand(command)
    try:
        response = str(response)
    except:
        pass
    return response

def sendControlCommand(command):
    response = None
    for i in range(0, 5):
        response = sendCommand(command)
        if response == 'OK' or response == 'ok':
            return True
    return False

# -----------------------------------------------
# Local calls
# -----------------------------------------------
probabilityThreshold = 50

def getPredictionsSorted(jsonPrediction):
    jsonObj = json.loads(jsonPrediction)
    preds = jsonObj['predictions']
    sorted_preds = sorted(preds, key=lambda x: x['probability'], reverse=True)
    strSortedPreds = ""
    if (sorted_preds):
        for pred in sorted_preds:
            # tag name and prob * 100
            tagName = str(pred['tagName'])
            probability = pred['probability'] * 100
            # apply threshold
            if (probability >= probabilityThreshold):
                strSortedPreds = strSortedPreds + tagName + ": " + str(probability) + "\n"
    return strSortedPreds

# instantiate flask app and push a context
app = Flask(__name__)

# -----------------------------------------------
# Main program
# -----------------------------------------------

# connection info
UDP_IP = '192.168.10.1'
UDP_PORT = 8889
last_received_command = time.time()
STATE_UDP_PORT = 8890
address = (UDP_IP, UDP_PORT)
response = None
response_state = None
clientSocket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
clientSocket.bind(('', UDP_PORT))
stateSocket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
stateSocket.bind(('', STATE_UDP_PORT))

# start threads
recThread = threading.Thread(target=receiveData)
recThread.daemon = True
recThread.start()
stateThread = threading.Thread(target=readStates)
stateThread.daemon = True
stateThread.start()

# connect to drone
response = sendControlCommand("command")
print(f'command response: {response}')
response = sendControlCommand("streamon")
print(f'streamon response: {response}')

# drone information
battery = 0

# open UDP
print(f'opening UDP video feed, wait 2 seconds ')
videoUDP = 'udp://192.168.10.1:11111'
cap = cv2.VideoCapture(videoUDP)
time.sleep(2)

# open
i = 0
while True:
    i = i + 1
    start_time = time.time()
    sendReadCommand('battery?')
    print(f'battery: {battery} % – i: {i}')
    try:
        ret, frame = cap.read()
        img = cv2.resize(frame, (640, 480))

        # save image to disk and open it
        imgNumber = str(i).zfill(5)
        frameImageFileName = str(f'image{imgNumber}.png')
        if os.path.exists(frameImageFileName):
            os.remove(frameImageFileName)
        cv2.imwrite(frameImageFileName, img)
        with open(frameImageFileName, 'rb') as f:
            img_data = f.read()

        # analyze file in local container
        api_url = "http://127.0.0.1:8070/image"
        r = requests.post(api_url, data=img_data)
        with app.app_context():
            jsonResults = jsonify(r.json())
            jsonStr = jsonResults.get_data(as_text=True)
        predSorted = getPredictionsSorted(jsonStr)

        fpsInfo = ""
        if (time.time() - start_time) > 0:
            fpsInfo = "FPS: " + str(1.0 / (time.time() - start_time)) + "\n-------------------\n"  # FPS = 1 / time to process loop

        # display FPS and Predictions, split text into lines, thanks OpenCV putText()
        frameInfo = fpsInfo + predSorted
        print(frameInfo)
        j = 0
        for j, line in enumerate(frameInfo.split('\n')):
            print(f'{j} {line}')
            cv2.putText(img, line, (10, 10 * j), cv2.FONT_HERSHEY_SIMPLEX, 0.4, (255, 255, 255), 1)
        cv2.imshow('@elbruno – DJI Tello Camera', img)
    except Exception as e:
        print(f'exc: {e}')
        pass
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

response = sendControlCommand("streamoff")
print(f'streamoff response: {response}')

And if you want to see this up and running, it’s much better to watch it in a video (start at ):

The complete source code can be found here https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Happy coding!

Greetings

El Bruno

References

#Coding4Fun – How to control your #drone with 20 lines of code! (13/N)

Hi!

No code today, just some tips about a specific network configuration to work with the drone.

In 2 days I’ll share a session about how to do some AI with a drone, and I just realized that I’ll be doing this session live. Going back to post 4 in this series, I realized that:

My PC is connected to the drone via Wi-Fi.

That means I can’t connect to the Internet to host the session online.

In normal times, I’d buy a couple of adapters and have everything in wireless mode. However, these days a simple Amazon order may take a couple of weeks.

So I analyzed my home network and realized that I could reuse some of my Linksys Velop nodes in my office. At home I have 3 nodes to cover the whole house, and I decided to move one of them to my office and use one of its Ethernet ports to connect a network cable to my PC. So my final configuration is something like this (sorry, no Visio installed):

Thanks to this port, I can be connected to the Internet and host the session, and also be connected to the drone and have some fun!

More information about the event here

GlobalAI Community Virtual Tour, April 8th. Let’s code a drone to follow faces! Using AI, Python, containers and more

See you in a couple of days!

Happy coding!

Greetings

El Bruno

References
