#Coding4Fun – How to control your #drone ✈ with 20 lines of code! (24/N) #AzureIoT


Hi !

Final post, so let’s recap

We have a device template representing a drone, tracking accelerometer, battery and temperature values.

We have created a new device in Azure IoT Central to twin our drone, and we have copied its connection information: Scope ID, Device ID and Key.

Let’s use a “get data from the drone” demo and publish this information to Azure IoT Central.

I’ll share the complete file below, however let’s take a look at the main app code.

  • The following code block iterates 300 times while reading the drone information.
  • In each iteration, it sends the accelerometer values to the drone device. These are drone telemetry capabilities.
  • Every 10 iterations, it also updates the other drone capabilities: battery and temperature. These are drone properties.
# MAIN APP
battery = 0
agx     = 0
agy     = 0
agz     = 0
temph   = 0
templ   = 0

i = 0
while i < 300:
    i = i + 1
    sendReadCommand('battery?')
    await drone.send_telemetry(agx, agy, agz)
    if (i % 10) == 0:
        await drone.send_properties(battery, temph, templ)
    time.sleep(1)

Once we run the app, the Azure IoT Central dashboard will show the real-time information from the drone. If the drone is flying and reporting accelerometer values, the agx, agy, agz chart will display the values for the x, y and z axes.

drone azure iot central out of battery

Disclaimer: I declared the temperature unit as Celsius. You will quickly realize that the values are actually in Fahrenheit.

And as promised here is the full code for this demo.

# Copyright (c) 2020
# Author: Bruno Capuano
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# -----------------------------------------------
# 70 AZURE IOT HUB Sample
# -----------------------------------------------
import socket
import time
import threading
import iotc
import provision_service
import asyncio
import json
import datetime
from drone_device import Drone_Device
# -----------------------------------------------
# RECEIVE DATA FUNCTIONS
# -----------------------------------------------
def receiveData():
    global response, clientSocket
    while True:
        try:
            response, _ = clientSocket.recvfrom(1024)
        except:
            break

def readStates():
    global battery, agx, agy, agz, temph, templ
    global response, response_state, clientSocket, stateSocket, address
    while True:
        try:
            response_state, _ = stateSocket.recvfrom(256)
            if response_state != b'ok':
                response_state = response_state.decode('ASCII')
                list = response_state.replace(';', ':').split(':')
                battery = int(list[21])
                agx     = float(list[27])
                agy     = float(list[29])
                agz     = float(list[31])
                temph   = int(list[15])
                templ   = int(list[13])
        except Exception as e:
            print(f'exc: {e}')

# -----------------------------------------------
# SEND COMMAND FUNCTIONS
# -----------------------------------------------
def sendCommand(command):
    global response, clientSocket, address
    timestamp = int(time.time() * 1000)
    clientSocket.sendto(command.encode('utf-8'), address)
    while response is None:
        if (time.time() * 1000) - timestamp > 5 * 1000:
            return False
    return response

def sendReadCommand(command):
    global response
    response = sendCommand(command)
    try:
        response = str(response)
    except:
        pass
    return response

def sendControlCommand(command):
    global response, response_state, clientSocket, stateSocket, address
    response = None
    for i in range(0, 5):
        response = sendCommand(command)
        if response == b'OK' or response == b'ok':
            return True
    return False

# -----------------------------------------------
# AZURE IOT CENTRAL
# -----------------------------------------------
async def init_drone_AzureIoT():
    global drone
    scope     = "Azure IoT Central Device Connect Scope ID goes here"
    device_id = "Azure IoT Central Device Connect Device ID goes here"
    key       = "Azure IoT Central Device Connect KEY goes here"
    drone = Drone_Device(scope, device_id, key)
    await drone.init_azureIoT()

# -----------------------------------------------
# MAIN APP
# -----------------------------------------------
async def main():
    global battery, agx, agy, agz, temph, templ
    global response, response_state, clientSocket, stateSocket, address
    global drone

    await init_drone_AzureIoT()

    # CONNECTION TO THE DRONE
    # connection info
    UDP_IP = '192.168.10.1'
    UDP_PORT = 8889
    last_received_command = time.time()
    STATE_UDP_PORT = 8890
    address = (UDP_IP, UDP_PORT)
    response = None
    response_state = None
    clientSocket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    clientSocket.bind(('', UDP_PORT))
    stateSocket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    stateSocket.bind(('', STATE_UDP_PORT))

    # LISTENER THREADS
    # start threads
    recThread = threading.Thread(target=receiveData)
    recThread.daemon = True
    recThread.start()
    stateThread = threading.Thread(target=readStates)
    stateThread.daemon = True
    stateThread.start()

    # START DRONE CONNECTION
    # connect to drone
    response = sendControlCommand("command")
    print(f'command response: {response}')
    response = sendControlCommand("streamon")
    print(f'streamon response: {response}')

    # MAIN APP
    # drone information
    battery = 0
    agx     = 0
    agy     = 0
    agz     = 0
    temph   = 0
    templ   = 0

    i = 0
    while i < 300:
        i = i + 1
        sendReadCommand('battery?')
        await drone.send_telemetry(agx, agy, agz)
        if (i % 10) == 0:
            await drone.send_properties(battery, temph, templ)
        time.sleep(1)

if __name__ == "__main__":
    asyncio.run(main())
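
To make sense of the indices used in readStates, here is how a drone state line is flattened before parsing (a quick sketch; the sample values are made up, and the field order is the one readStates expects):

```python
# A drone state report is a key:value list; after replace(';', ':') and
# split(':'), every value sits right after its key name in the token list.
sample = "pitch:0;roll:0;yaw:0;vgx:0;vgy:0;vgz:0;templ:60;temph:62;tof:10;h:0;bat:87;baro:163.17;time:0;agx:-5.00;agy:3.00;agz:-999.00;"

tokens  = sample.replace(';', ':').split(':')
battery = int(tokens[21])    # value after 'bat'   -> 87
templ   = int(tokens[13])    # value after 'templ' -> 60
temph   = int(tokens[15])    # value after 'temph' -> 62
agx     = float(tokens[27])  # value after 'agx'   -> -5.0
```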

In the next post we will connect everything together!

Happy coding!

Greetings

El Bruno


References



#Coding4Fun – How to control your #drone ✈ with 20 lines of code! (23/N) #AzureIoT


Hi !

Now that we have a device template created, we can create a new device in our Azure IoT Central portal and start to send information to Azure IoT.

We start by creating a new device based on the previous template, and then it’s time to copy and paste the connection information for this device:

  • ID scope
  • Device ID
  • Primary Key
drone connect information

And, based on the official Python SDK documentation, I created this full class to send information to Azure IoT Central. A couple of notes:

  • The class needs scopeID, deviceID and key in the constructor to create the device connection string
  • There are 2 main functions to showcase how to send telemetry and update properties
  • Telemetry (send_telemetry) uses the predefined device capabilities: agx, agy and agz
  • Properties (send_properties) uses the predefined device capabilities: battery, templ and temph
# Copyright (c) 2020
# Author: Bruno Capuano
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
import os
import datetime
import asyncio
import json
from azure.iot.device.aio import IoTHubDeviceClient
from azure.iot.device import MethodResponse
from azure.iot.device.aio import ProvisioningDeviceClient
class Drone_Device():
    def __init__(self, scope, device_id, key, iothub: str = ""):
        self.scope = scope
        self.device_id = device_id
        self.key = key
        self.iothub = iothub

    async def init_azureIoT(self):
        cnn_str = await self.get_connection_string()
        self.device_client = IoTHubDeviceClient.create_from_connection_string(cnn_str)
        await self.device_client.connect()

    async def __register_device(self):
        provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
            provisioning_host='global.azure-devices-provisioning.net',
            registration_id=self.device_id,
            id_scope=self.scope,
            symmetric_key=self.key,
        )
        return await provisioning_device_client.register()

    async def get_connection_string(self):
        if self.iothub is None or self.iothub == "":
            print(f'{datetime.datetime.now()}: No IOTHUB specified. Attempting to resolve via global.azure-devices-provisioning.net')
            results = await asyncio.gather(self.__register_device())
            print(results)
            registration_result = results[0]
            cnn_str = 'HostName=' + registration_result.registration_state.assigned_hub + \
                      ';DeviceId=' + self.device_id + \
                      ';SharedAccessKey=' + self.key
        else:
            cnn_str = 'HostName=' + self.iothub + \
                      ';DeviceId=' + self.device_id + \
                      ';SharedAccessKey=' + self.key
        print(f'{datetime.datetime.now()}: Connection String = {cnn_str}')
        return cnn_str

    async def send_telemetry(self, agx, agy, agz):
        try:
            data = {
                "agx": agx,
                "agy": agy,
                "agz": agz
            }
            payload = json.dumps(data)
            print(f"{datetime.datetime.now()}: telemetry: {payload}")
            await self.device_client.send_message(payload)
        except Exception as e:
            print(f"{datetime.datetime.now()}: Exception during sending metrics: {e}")

    async def send_properties(self, bat, temph, templ):
        try:
            propertiesToUpdate = {
                'bat': bat,
                'templ': templ,
                'temph': temph
            }
            print(f"{datetime.datetime.now()}: properties: {propertiesToUpdate}")
            await self.device_client.patch_twin_reported_properties(propertiesToUpdate)
        except Exception as e:
            print(f"{datetime.datetime.now()}: Exception during sending metrics: {e}")
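
Both branches of get_connection_string assemble the same Azure IoT Hub device connection string; only the source of the host name differs. As a minimal sketch of that format (the hub name, device id and key below are placeholders, not real credentials):

```python
def build_connection_string(assigned_hub: str, device_id: str, key: str) -> str:
    # Standard Azure IoT Hub device connection string format
    return ('HostName=' + assigned_hub +
            ';DeviceId=' + device_id +
            ';SharedAccessKey=' + key)

print(build_connection_string('my-hub.azure-devices.net', 'drone01', 'base64key=='))
# HostName=my-hub.azure-devices.net;DeviceId=drone01;SharedAccessKey=base64key==
```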

In the next post we will connect everything together!

Happy coding!

Greetings

El Bruno


References



#Coding4Fun – How to control your #drone ✈ with 20 lines of code! (22/N) #AzureIoT


Hi !

In the live events where I explain how to code to control the drone, I show a scenario where drone information is sent to Azure IoT Central. I just realized that I never wrote about it, so here is post 1 of N describing this scenario.

Azure IoT Central

As usual let’s start with the official description for this Azure Service

IoT Central is an IoT application platform that reduces the burden and cost of developing, managing, and maintaining enterprise-grade IoT solutions. Choosing to build with IoT Central gives you the opportunity to focus time, money, and energy on transforming your business with IoT data, rather than just maintaining and updating a complex and continually evolving IoT infrastructure.

The web UI lets you monitor device conditions, create rules, and manage millions of devices and their data throughout their life cycle. Furthermore, it enables you to act on device insights by extending IoT intelligence into line-of-business applications.

What is Azure IoT Central ? see references

In the final output of this demo, we will have a device definition on Azure IoT Central tracking:

  • Realtime values for the drone accelerometer (axis x, y, z)
  • Drone temperature (lowest and highest)
  • Battery charge, in %

Something similar to this 👇

drone azure iot central out of battery

Drone Template

My 1st step is to create a device template. I was testing different configurations, and in the end I settled on the 4th iteration of my drone template: DroneTemplate v4

drone device template list

This template defines the following capabilities

  • Telemetry
    • agx
    • agy
    • agz
  • Properties
    • temph
    • templ
    • bat
drone template capabilities

Important: we can use custom data types, like a Vector for the accelerometer. However, in this sample all the properties and telemetry are defined as Integer values.
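
Under the hood, IoT Central stores each of these capabilities as part of a DTDL device model. Roughly, one of the telemetry capabilities above looks like this (a simplified sketch based on the DTDL format; the exact JSON exported by the portal includes more fields):

```python
# Simplified DTDL-style definition for the agx telemetry capability
agx_capability = {
    "@type": "Telemetry",
    "name": "agx",
    "displayName": "agx",
    "schema": "integer",
}
```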

Now it’s time to create a view to display some of the device properties. In this scenario I added a chart with the telemetry agx, agy and agz, plus 2 separate viewers for the battery and temperature.

In the battery tile, I added conditional formatting to display the value in different colors depending on the device’s battery charge.

And that’s it! With this simple device definition I can start to connect my test drone with the Azure IoT Central portal.

Happy coding!

Greetings

El Bruno


References



#Coding4Fun – RaspberryPi LED Christmas Tree 🎄 sync with Microsoft Teams (2/N) Azure AD, 🦒 and Apps


Hi !

Back in the Microsoft Lync days, we had access to an SDK that allowed us to interact with the messaging client in local mode. As of today, there is no similar SDK to interact with Microsoft Teams. So, if I want to know the status of a user in Microsoft Teams, I need to call the Microsoft Graph. And this is not as simple as 2 lines of code with the Microsoft Lync SDK.

Note: I’m fully supporting the Microsoft Graph 👇👇👇

Bruno and Nilesh Graph

Using the Graph is super easy. I mean, once you understand the entities and elements, it is easy to make queries to get Graph information. The tricky part is getting the permissions to call the Graph.

Isaac describes the process very well in the [Configuring an Azure Active Directory Application] section of his blog (see references). There are 2 main steps here:

  • Create and add an application to an Azure AD
  • Grant permissions to the new App
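
Once the app exists and has consent, the presence query itself is small. The sketch below uses the real Microsoft Graph v1.0 presence endpoint; token acquisition is omitted, and parse_availability is just a hypothetical helper to show the shape of the response:

```python
import json
import urllib.request

# Real Microsoft Graph v1.0 endpoint; needs the Presence.Read delegated permission
GRAPH_PRESENCE_URL = "https://graph.microsoft.com/v1.0/me/presence"

def presence_request(token: str) -> urllib.request.Request:
    # Build the authenticated GET request (token comes from your Azure AD app flow)
    return urllib.request.Request(
        GRAPH_PRESENCE_URL,
        headers={"Authorization": f"Bearer {token}"},
    )

def parse_availability(body: str) -> str:
    # Graph returns e.g. {"availability": "Busy", "activity": "InACall", ...}
    return json.loads(body)["availability"]
```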

Avanade is not as big as Microsoft; however, just trying to get permissions to create an app in our Azure AD to lab and test is a colossal task. Tons of internal approvals, security checks, etc. It is not as easy as using a local SDK. That’s a bummer.

We have a Virtual Innovation Center to present and lab ideas, and I have full control there. So I’m using this environment for labs while I figure out the best way to trigger this at Avanade.

Again, I need to learn more about this. The latest version of Presence Light no longer requires Admin Consent, so this is no longer an issue for folks who want to get Presence. It didn’t work for me, so I created the app, and it is up and running!

Please read Isaac’s post; he really explains the full process.

Raspberry Pi Christmas

Happy coding!

Greetings

El Bruno


References

#Coding4Fun – #RaspberryPi LED Christmas Tree 🎄 sync with #MicrosoftTeams (1/N) Why? Why not !


Hi !

So I was preparing boxes with the gadgets I have for our next house move, and I realized that I had a super cool LED-based Christmas tree that can be used with a Raspberry Pi. The device can be programmed with Python, so I went through my notes and found something different to test with the tree:

Sync the Tree Lights with my Microsoft Teams Status

Something like this !

After Build 2020, I had some notes from Scott Hanselman and Isaac Levin which explain a way to start with this; please see the references.

Note: Isaac’s Presence Light app does 90% of the work, kudos to him here!

So, I decided to start easy and create a simple Python Flask web app with the following endpoints:

  • setcolor (color as parameter)
  • off
  • on
  • away
  • online
  • busy

Microsoft Teams supports many more states, but these are good enough for me to test the app.

The app is super simple, here goes the code

# Bruno Capuano
# simple webserver with flask
# change raspberry pi tree colours

from flask import Flask, request                                                
from tree import RGBXmasTree
from time import sleep
from colorzero import Color

def set_tree_color(new_color):
    global tree
    print(f'set color: {new_color}')
    tree.color = Color(new_color)
    return new_color

app = Flask(__name__)
tree = RGBXmasTree()

@app.route("/setcolor")
def set_color():
    # http://rpi8gb:8090/setcolor?color=red
    color = request.args.get("color")
    return set_tree_color(color)    

# Microsoft Team Status

@app.route("/off")
def set_off():
    global tree
    tree.off()
    return 'OK'

@app.route("/on")
def set_on():
    global tree
    tree.on()
    return 'OK'    

@app.route("/away")
def teams_away():
    return set_tree_color('yellow')

@app.route("/online")
def teams_online():
    return set_tree_color('green')

@app.route("/busy")
def teams_busy():
    return set_tree_color('red')

if __name__ == '__main__':
    # Run the server
    app.run(host='0.0.0.0', port=8090)

And now, time for settings and configuration in the Presence Light app (more details about this in the next post). My device name is RPI8GB, so you can understand the Custom API URIs.
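
For reference, this is roughly what those Custom API URIs look like, assuming the Pi answers as rpi8gb on port 8090 (a hypothetical helper, just to make the mapping explicit):

```python
# One endpoint per Teams state, served by the Flask app above
BASE = "http://rpi8gb:8090"

def uri_for_state(state: str) -> str:
    # Presence Light calls this URI when my Teams status changes
    return f"{BASE}/{state}"

print(uri_for_state("busy"))    # http://rpi8gb:8090/busy   -> tree turns red
print(uri_for_state("online"))  # http://rpi8gb:8090/online -> tree turns green
```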

Presence Light config for Raspberry Pi Christmas Tree

And it was running !

More details and a better experience in future posts.

Happy coding!

Greetings

El Bruno


References

#Coding4Fun – How to control your #drone with 20 lines of code! (21/N)


Hi !

In my post series I already wrote about how to detect faces. We can do this with a camera and OpenCV. However, a drone can also be moved on command, so let’s write some lines to detect a face and calculate its orientation and distance from the center of the camera.

In order to do this, let’s first draw a grid in the camera frame, and once a face is detected, show its distance and orientation from the center.

face detected on camera and calculate position from center

Let’s start with a Grid. The idea is to create a 3×3 grid in the camera frame, and use the center cell as reference for the detected objects. The code to create a 3×3 grid is this one:

def displayGrid(frame):
    # Add a 3x3 Grid
    cv2.line(frame, (int(camera_Width/2)-centerZone, 0)     , (int(camera_Width/2)-centerZone, camera_Heigth)    , lineColor, lineThickness)
    cv2.line(frame, (int(camera_Width/2)+centerZone, 0)     , (int(camera_Width/2)+centerZone, camera_Heigth)    , lineColor, lineThickness)
    cv2.line(frame, (0, int(camera_Heigth / 2) - centerZone), (camera_Width, int(camera_Heigth / 2) - centerZone), lineColor, lineThickness)
    cv2.line(frame, (0, int(camera_Heigth / 2) + centerZone), (camera_Width, int(camera_Heigth / 2) + centerZone), lineColor, lineThickness)

# Camera Settings
camera_Width  = 1024 # 1280 # 640
camera_Heigth = 780  # 960  # 480
centerZone    = 100

# GridLine color green and thickness
lineColor = (0, 255, 0) 
lineThickness = 2

We use the line() function from OpenCV and do some calculations to get the start and end points of the 4 grid lines: 2 vertical lines and 2 horizontal lines. For this demo, I’ll implement this with my main webcam.

drone 3x3 grid in the camera frame

Based on my face detection samples and other samples in GitHub (see references), now I’ll calculate the position of the detected face (with x, y, h, w) from the center of the camera:

def calculatePositionForDetectedFace(frame, x, y, h , w):
    # calculate direction and relative position of the face
    cx = int(x + (w / 2))  # Center X of the Face
    cy = int(y + (h / 2))  # Center Y of the Face

    if (cx < int(camera_Width/2) - centerZone):
        cv2.putText  (frame, " LEFT " , (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1 , colorGreen, 2)
        dir = 1
    elif (cx > int(camera_Width / 2) + centerZone):
        cv2.putText(frame, " RIGHT ", (20, 50), cv2.FONT_HERSHEY_COMPLEX,1,colorGreen, 3)
        dir = 2
    elif (cy < int(camera_Heigth / 2) - centerZone):
        cv2.putText(frame, " UP ", (20, 50), cv2.FONT_HERSHEY_COMPLEX,1,colorGreen, 3)
        dir = 3
    elif (cy > int(camera_Heigth / 2) + centerZone):
        cv2.putText(frame, " DOWN ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1,colorGreen, 3)
        dir = 4
    else: dir=0

    # display detected face frame, line from center and direction to go
    cv2.line     (frame, (int(camera_Width/2),int(camera_Heigth/2)), (cx,cy), colorRed, messageThickness)
    cv2.rectangle(frame, (x, y), (x + w, y + h), colorBlue, messageThickness)
    cv2.putText  (frame, str(int(x)) + " " + str(int(y)), (x - 20, y - 45), cv2.FONT_HERSHEY_COMPLEX,0.7, colorRed, messageThickness)

The output is similar to this one

And now with the base code completed, it’s time to add this logic to the drone samples !
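
Before moving this to the drone samples, the direction logic from calculatePositionForDetectedFace can be reduced to a small pure function that is easy to test on its own (a sketch; same dir codes and camera settings as above, without the OpenCV drawing):

```python
def face_direction(cx, cy, width=1024, height=780, center_zone=100):
    # 0 = centered, 1 = LEFT, 2 = RIGHT, 3 = UP, 4 = DOWN
    if cx < width / 2 - center_zone:
        return 1
    if cx > width / 2 + center_zone:
        return 2
    if cy < height / 2 - center_zone:
        return 3
    if cy > height / 2 + center_zone:
        return 4
    return 0

print(face_direction(512, 390))  # 0: face inside the center cell
print(face_direction(100, 390))  # 1: face left of the center cell
```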

Bonus: the complete code.

# Bruno Capuano 2020
# display the camera feed using OpenCV
# display a 3x3 Grid
# detect faces using openCV and haar cascades
# calculate the relative position for the face from the center of the camera
import os
import time
import cv2

def displayGrid(frame):
    # Add a 3x3 Grid
    cv2.line(frame, (int(camera_Width/2)-centerZone, 0)     , (int(camera_Width/2)-centerZone, camera_Heigth)    , lineColor, lineThickness)
    cv2.line(frame, (int(camera_Width/2)+centerZone, 0)     , (int(camera_Width/2)+centerZone, camera_Heigth)    , lineColor, lineThickness)
    cv2.line(frame, (0, int(camera_Heigth / 2) - centerZone), (camera_Width, int(camera_Heigth / 2) - centerZone), lineColor, lineThickness)
    cv2.line(frame, (0, int(camera_Heigth / 2) + centerZone), (camera_Width, int(camera_Heigth / 2) + centerZone), lineColor, lineThickness)

def calculatePositionForDetectedFace(frame, x, y, h, w):
    # calculate direction and relative position of the face
    cx = int(x + (w / 2))  # Center X of the Face
    cy = int(y + (h / 2))  # Center Y of the Face

    if (cx < int(camera_Width/2) - centerZone):
        cv2.putText(frame, " LEFT ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 2)
        dir = 1
    elif (cx > int(camera_Width / 2) + centerZone):
        cv2.putText(frame, " RIGHT ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 3)
        dir = 2
    elif (cy < int(camera_Heigth / 2) - centerZone):
        cv2.putText(frame, " UP ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 3)
        dir = 3
    elif (cy > int(camera_Heigth / 2) + centerZone):
        cv2.putText(frame, " DOWN ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 3)
        dir = 4
    else:
        dir = 0

    # display detected face frame, line from center and direction to go
    cv2.line(frame, (int(camera_Width/2), int(camera_Heigth/2)), (cx, cy), colorRed, messageThickness)
    cv2.rectangle(frame, (x, y), (x + w, y + h), colorBlue, messageThickness)
    cv2.putText(frame, str(int(x)) + " " + str(int(y)), (x - 20, y - 45), cv2.FONT_HERSHEY_COMPLEX, 0.7, colorRed, messageThickness)

# Camera Settings
camera_Width  = 1024 # 1280 # 640
camera_Heigth = 780  # 960  # 480
centerZone    = 100

# GridLine color green and thickness
lineColor = (0, 255, 0)
lineThickness = 2

# message color and thickness
colorBlue  = (255, 0, 0)
colorGreen = (0, 255, 0)
colorRed   = (0, 0, 255)
messageThickness = 2

dsize = (camera_Width, camera_Heigth)
video_capture = cv2.VideoCapture(1)
time.sleep(2.0)

# enable face detection
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

i = 0
while True:
    i = i + 1
    ret, frameOrig = video_capture.read()
    frame = cv2.resize(frameOrig, dsize)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    displayGrid(frame)

    # detect faces
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        # display face in grid
        calculatePositionForDetectedFace(frame, x, y, h, w)

    cv2.imshow('@ElBruno – Follow Faces', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()

Happy coding!

Greetings

El Bruno


References

#Coding4Fun – How to control your #drone with 20 lines of code! (3/N)


Hi!

Today I’ll write the equivalent of a Hello World in the drone ecosystem. This is a very complex app which:

  • takes off the drone
  • waits a couple of seconds
  • lands the drone

I’ve followed the Python code sample from the SDK and the final code is very complex (see below). It deserves some remarks:

  • Line 11. The function recv() runs in a separate thread to receive messages from the drone. The thread is started on line 44.
  • Line 19. The function sendMessage() sends messages to the drone. Important: the messages must be UTF-8 encoded. This took me some time to figure out. It also implements a 5 second timeout while the response is processed in the separate thread.
  • Lines 31-41. Connection information and sockets to communicate with the drone.
  • Line 48. Main code for the app: start the SDK mode, wait 5 seconds, send the take off message, and then send the land message. A very simple exception catch is implemented here.

Important: When using the SDK, the 1st command should be “command”, as I did in line 49.

Here is the code:

# Bruno Capuano
# Simple drone demo
# connect to drone
# send take off, sleep and land
import threading
import socket
import sys
import time

def recv():
    global response
    while True:
        try:
            response, _ = clientSocket.recvfrom(1024)
        except:
            break

def sendMessage(command):
    global response
    timestamp = int(time.time() * 1000)
    clientSocket.sendto(command.encode('utf-8'), address)
    while response is None:
        if (time.time() * 1000) - timestamp > 5 * 1000:
            return False
    return response

# connection info
UDP_IP = '192.168.10.1'
UDP_PORT = 8889
last_received_command = time.time()
STATE_UDP_PORT = 8890
address = (UDP_IP, UDP_PORT)
response = None
clientSocket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
clientSocket.bind(('', UDP_PORT))

# start threads
recThread = threading.Thread(target=recv)
recThread.daemon = True
recThread.start()

try:
    msg = "command"
    sendMessage(msg)  # start SDK mode
    time.sleep(5)
    msg = "takeoff"
    sendMessage(msg)  # takeoff
    time.sleep(3)
    msg = "land"
    sendMessage(msg)  # land
except Exception as e:
    print(f'\nError processing the message: {msg}\n{e}')
finally:
    print('\n And now Stop\n')
    clientSocket.close()  # close the UDP socket

Here is the app running at 3X speed so you don’t spend all day watching boring drone videos:

Drone Hello World, take off and land

Happy coding!

Greetings

El Bruno


#Coding4Fun – Slap your boss away with #Skype and #LeapMotion (I’m getting ready for 2020!)


Important: This repost is just to start one of my 2020 projects, which is very easy: write more fun stuff!

Hi !

During all my time working I was lucky enough to have some very cool bosses. So this is not personal at all; it is just a funny way to discard an incoming call from someone.

The main idea is to use the Leap Motion SDK and the Skype for Business Desktop SDK (Lync 2013 SDK) to create a simple app which allows us to ignore a call just by doing a swipe gesture.


Important: If you try to use the Lync 2013 SDK on Windows 10 with Visual Studio 2015 or Visual Studio 15 Preview, you’ll find tons of problems. I’ll write a post later about the necessary steps to do this.

The source code is available in GitHub

Greetings @ Toronto

-El Bruno

References

#Coding4Fun – Goodbye to one of the funniest and useful Channel9 ‘s resources


Hi!

Coding4Fun’s blog has officially closed. I guess the best way to summarize what Coding4Fun is, is with a couple of facts:

  • 1st article in April 2005, written by a VIP guest: Scott Hanselman (@shanselman), who was not even a Microsoft employee in those days!
  • Over 1500 blog posts and almost 300 videos
  • Much more

Personally, I must say that besides being mentioned several times, Coding4Fun was one of the sources I reviewed most frequently. The blog allowed me to meet great authors (very smart people), and it was always a constant source of inspiration for new ideas.

Many of the best Kinect and IoT projects could be found there. The same applies to the missile launchers, the 1st Labs of Project Oxford (now known as Cognitive Services) and much more!

Greg Duncan (@gduncan411)

So, it is time to say THANKS to Greg Duncan (@gduncan411), who for more than 10 years has done a great job. Thanks!

Greetings @ Burlington

El Bruno

References

#Coding4Fun – Goodbye to one of Channel9’s most fun resources

Hi!

The Coding4Fun blog has officially closed. I guess the best way to summarize what Coding4Fun is, is with a couple of facts.

Personally, I must say that besides being mentioned several times, Coding4Fun was one of the sources I reviewed most frequently. The blog allowed me to meet great authors and very smart people, and it was always a constant source of inspiration for new ideas.

Many of the best Kinect and IoT projects could be found there. The same applies to the missile launchers, the 1st Labs of Project Oxford (now known as Cognitive Services) and much more!

So, it is time to say THANKS to Greg Duncan (@gduncan411), who for more than 10 years has done a great job. Thanks!

Greetings @ Burlington

El Bruno

References