#Coding4Fun – How to control your #drone with 20 lines of code! (21/N)


Hi !

In this post series I already wrote about how to detect faces with a camera and OpenCV. However, a drone can also be moved on command, so let’s write a few lines to detect a face and calculate its orientation and distance from the center of the camera frame.

In order to do this, let’s first draw a grid in the camera frame, and once a face is detected, show its distance and orientation from the center.

Face detected on camera, with the position calculated from the center

Let’s start with the grid. The idea is to create a 3×3 grid in the camera frame and use the center cell as a reference for the detected objects. The code to create the 3×3 grid is this one:

def displayGrid(frame):
    # Add a 3x3 Grid
    cv2.line(frame, (int(camera_Width/2)-centerZone, 0)     , (int(camera_Width/2)-centerZone, camera_Heigth)    , lineColor, lineThickness)
    cv2.line(frame, (int(camera_Width/2)+centerZone, 0)     , (int(camera_Width/2)+centerZone, camera_Heigth)    , lineColor, lineThickness)
    cv2.line(frame, (0, int(camera_Heigth / 2) - centerZone), (camera_Width, int(camera_Heigth / 2) - centerZone), lineColor, lineThickness)
    cv2.line(frame, (0, int(camera_Heigth / 2) + centerZone), (camera_Width, int(camera_Heigth / 2) + centerZone), lineColor, lineThickness)

# Camera Settings
camera_Width  = 1024 # 1280 # 640
camera_Heigth = 780  # 960  # 480
centerZone    = 100

# GridLine color green and thickness
lineColor = (0, 255, 0) 
lineThickness = 2

We use the line() function from OpenCV and do some calculations to get the start and end points for the 4 grid lines: 2 vertical lines and 2 horizontal lines. For this demo, I’ll run this against my main webcam.

drone 3x3 grid in the camera frame

Based on my face detection samples and other samples on GitHub (see references), I’ll now calculate the position of the detected face (given x, y, h, w) relative to the center of the camera:

def calculatePositionForDetectedFace(frame, x, y, h, w):
    # calculate direction and relative position of the face
    cx = int(x + (w / 2))  # Center X of the Face
    cy = int(y + (h / 2))  # Center Y of the Face

    if cx < int(camera_Width / 2) - centerZone:
        cv2.putText(frame, " LEFT ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 2)
        dir = 1
    elif cx > int(camera_Width / 2) + centerZone:
        cv2.putText(frame, " RIGHT ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 3)
        dir = 2
    elif cy < int(camera_Heigth / 2) - centerZone:
        cv2.putText(frame, " UP ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 3)
        dir = 3
    elif cy > int(camera_Heigth / 2) + centerZone:
        cv2.putText(frame, " DOWN ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 3)
        dir = 4
    else:
        dir = 0

    # display detected face frame, line from center and direction to go
    cv2.line(frame, (int(camera_Width / 2), int(camera_Heigth / 2)), (cx, cy), colorRed, messageThickness)
    cv2.rectangle(frame, (x, y), (x + w, y + h), colorBlue, messageThickness)
    cv2.putText(frame, str(int(x)) + " " + str(int(y)), (x - 20, y - 45), cv2.FONT_HERSHEY_COMPLEX, 0.7, colorRed, messageThickness)

The output is similar to this one

And now, with the base code completed, it’s time to add this logic to the drone samples!
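As a teaser of that next step, here is a minimal sketch (an assumption, not the final drone sample) of how the calculated direction could drive a Tello-style drone using the SDK movement commands (left / right / up / down, with a distance in centimeters) and the sendMessage() UDP helper from the Hello World sample below. For this to work, calculatePositionForDetectedFace() would need to return the dir value, and the left/right mapping may need to be flipped depending on whether the camera image is mirrored; moveDrone() and the 20 cm step are placeholder names and values.

def moveDrone(dir, step=20):
    # dir: 0 = centered, 1 = left, 2 = right, 3 = up, 4 = down
    # Tello SDK movement commands expect a distance in cm (20-500)
    commands = {1: "left", 2: "right", 3: "up", 4: "down"}
    if dir in commands:
        sendMessage(commands[dir] + " " + str(step))

# usage inside the face loop:
# dir = calculatePositionForDetectedFace(frame, x, y, h, w)
# moveDrone(dir)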

Bonus: the complete code.

# Bruno Capuano 2020
# display the camera feed using OpenCV
# display a 3x3 Grid
# detect faces using openCV and haar cascades
# calculate the relative position for the face from the center of the camera

import os
import time
import cv2

def displayGrid(frame):
    # Add a 3x3 Grid
    cv2.line(frame, (int(camera_Width/2)-centerZone, 0), (int(camera_Width/2)-centerZone, camera_Heigth), lineColor, lineThickness)
    cv2.line(frame, (int(camera_Width/2)+centerZone, 0), (int(camera_Width/2)+centerZone, camera_Heigth), lineColor, lineThickness)
    cv2.line(frame, (0, int(camera_Heigth/2)-centerZone), (camera_Width, int(camera_Heigth/2)-centerZone), lineColor, lineThickness)
    cv2.line(frame, (0, int(camera_Heigth/2)+centerZone), (camera_Width, int(camera_Heigth/2)+centerZone), lineColor, lineThickness)

def calculatePositionForDetectedFace(frame, x, y, h, w):
    # calculate direction and relative position of the face
    cx = int(x + (w / 2))  # Center X of the Face
    cy = int(y + (h / 2))  # Center Y of the Face

    if cx < int(camera_Width / 2) - centerZone:
        cv2.putText(frame, " LEFT ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 2)
        dir = 1
    elif cx > int(camera_Width / 2) + centerZone:
        cv2.putText(frame, " RIGHT ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 3)
        dir = 2
    elif cy < int(camera_Heigth / 2) - centerZone:
        cv2.putText(frame, " UP ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 3)
        dir = 3
    elif cy > int(camera_Heigth / 2) + centerZone:
        cv2.putText(frame, " DOWN ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 3)
        dir = 4
    else:
        dir = 0

    # display detected face frame, line from center and direction to go
    cv2.line(frame, (int(camera_Width/2), int(camera_Heigth/2)), (cx, cy), colorRed, messageThickness)
    cv2.rectangle(frame, (x, y), (x + w, y + h), colorBlue, messageThickness)
    cv2.putText(frame, str(int(x)) + " " + str(int(y)), (x - 20, y - 45), cv2.FONT_HERSHEY_COMPLEX, 0.7, colorRed, messageThickness)

# Camera Settings
camera_Width  = 1024  # 1280 # 640
camera_Heigth = 780   # 960  # 480
centerZone    = 100

# GridLine color green and thickness
lineColor = (0, 255, 0)
lineThickness = 2

# message color and thickness
colorBlue = (255, 0, 0)
colorGreen = (0, 255, 0)
colorRed = (0, 0, 255)
messageThickness = 2

dsize = (camera_Width, camera_Heigth)
video_capture = cv2.VideoCapture(1)
time.sleep(2.0)

# enable face and smile detection
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

i = 0
while True:
    i = i + 1
    ret, frameOrig = video_capture.read()
    frame = cv2.resize(frameOrig, dsize)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    displayGrid(frame)

    # detect faces
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        # display face in grid
        calculatePositionForDetectedFace(frame, x, y, h, w)

    cv2.imshow('@ElBruno – Follow Faces', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()

Happy coding!

Greetings

El Bruno

References

#Coding4Fun – How to control your #drone with 20 lines of code! (3/N)


Hi!

Today I’ll write the equivalent of a Hello World in the drone ecosystem. This is a very complex app which:

  • takes off the drone
  • waits a couple of seconds
  • lands the drone

I’ve followed the Python code sample from the SDK, and the final code is very complex (see below). It deserves some remarks:

  • Line 11. The function recv() runs in a separate thread to receive messages from the drone. The thread is started on line 44.
  • Line 19. The function sendMessage() sends messages to the drone. Important: the messages must be UTF-8 encoded. This took me some time until I figured it out. The function also implements a 5-second timeout while the response is processed in the separate thread.
  • Lines 31-41. Connection information and sockets to communicate with the drone.
  • Line 48. Main code for the app: start the SDK mode, wait 5 seconds, send the take off message, and then send the land message. A very simple exception catch is implemented here.

Important: When using the SDK, the 1st command should be “command”, as I did in line 49.

Here is the code:

# Bruno Capuano
# Simple drone demo
# connect to drone
# send take off, sleep and land

import threading
import socket
import sys
import time

def recv():
    global response
    while True:
        try:
            response, _ = clientSocket.recvfrom(1024)
        except:
            break

def sendMessage(command):
    global response
    timestamp = int(time.time() * 1000)
    clientSocket.sendto(command.encode('utf-8'), address)
    while response is None:
        if (time.time() * 1000) - timestamp > 5 * 1000:
            return False
    return response

# connection info
UDP_IP = '192.168.10.1'
UDP_PORT = 8889
last_received_command = time.time()
STATE_UDP_PORT = 8890
address = (UDP_IP, UDP_PORT)
response = None
clientSocket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
clientSocket.bind(('', UDP_PORT))

# start threads
recThread = threading.Thread(target=recv)
recThread.daemon = True
recThread.start()

try:
    msg = "command"
    sendMessage(msg)  # start SDK mode
    time.sleep(5)
    msg = "takeoff"
    sendMessage(msg)  # takeoff
    time.sleep(3)
    msg = "land"
    sendMessage(msg)  # land
except Exception as e:
    print(f'\nError processing the message: {msg}\n{e}')
finally:
    print('\n And now Stop\n')
    clientSocket.close()

Here is the app running at 3X speed so you don’t spend all day watching boring drone videos:

Drone Hello World, take off and land

Happy coding!

Greetings

El Bruno

#Coding4Fun – Slap your boss away with #Skype and #LeapMotion (I’m getting ready for 2020!)


Important: This repost is just to start one of my 2020 projects, which is very easy: write more fun stuff!

Hi !

During all my time working I have been lucky enough to have some very cool bosses. So this is not personal at all; it is just a funny way to discard an “incoming call” from someone.

The main idea is to use the Leap Motion SDK and the Skype for Business Desktop SDK (Lync 2013 SDK) to create a simple app which allows us to ignore a call just by doing a Swipe gesture.
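The real app is written in C# against the Leap Motion SDK and the Lync 2013 SDK; just to illustrate the gesture side, here is a minimal sketch using the Leap Motion Python bindings (SDK v2), where reject_incoming_call() is only a placeholder for whatever call-handling hook the Skype/Lync side exposes. Treat it as an outline under those assumptions, not the actual implementation.

import sys
import Leap

def reject_incoming_call():
    # Placeholder: the real app uses the Lync 2013 SDK to reject the call
    print("Swipe detected, rejecting the incoming call")

class SlapListener(Leap.Listener):
    def on_connect(self, controller):
        # ask the Leap Motion controller to track swipe gestures
        controller.enable_gesture(Leap.Gesture.TYPE_SWIPE)

    def on_frame(self, controller):
        frame = controller.frame()
        for gesture in frame.gestures():
            if gesture.type == Leap.Gesture.TYPE_SWIPE:
                reject_incoming_call()

controller = Leap.Controller()
listener = SlapListener()
controller.add_listener(listener)

print("Waiting for a swipe... press Enter to quit")
try:
    sys.stdin.readline()
finally:
    controller.remove_listener(listener)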


Important: If you try to use the Lync 2013 SDK on Windows 10 with Visual Studio 2015 or Visual Studio 15 Preview, you’ll find tons of problems. I’ll write a later post about the necessary steps to get this working.

The source code is available on GitHub

Greetings @ Toronto

-El Bruno

References

#Coding4Fun – Goodbye to one of the funniest and useful Channel9 ‘s resources


Hi!

The Coding4Fun blog has officially closed. I guess the best way to summarize what Coding4Fun was is with a couple of facts:

  • 1st article in April 2005. And it was written by a VIP guest: Scott Hanselman (@shanselman), who was not even a Microsoft employee in those days!
  • Over 1500 blog posts and almost 300 videos
  • Much more

Personally, I must say that besides being mentioned there several times, Coding4Fun was one of the sources I reviewed most frequently. The blog allowed me to meet great authors (very smart people), and it was always a constant source of inspiration for new ideas.

Many of the best Kinect and IoT projects could be found there. The same applies to the missile launchers, the 1st Labs for Project Oxford (now known as Cognitive Services) and much more!

Greg Duncan (@gduncan411)

So, it is time to say THANKS to Greg Duncan (@gduncan411), who has done a great job for more than 10 years. Thanks!

Greetings @ Burlington

El Bruno

References

#Coding4Fun – Goodbye to one of the most fun resources on Channel9


Hi!

The Coding4Fun blog has officially closed. I guess the best way to summarize what Coding4Fun was is with a couple of facts.

Personally, I must say that besides being mentioned there several times, Coding4Fun was one of the sources I reviewed most frequently. The blog allowed me to meet great authors and very smart people, and it was always a constant source of inspiration for new ideas.

Many of the best Kinect and IoT projects could be found there. The same applies to the missile launchers, the 1st Labs for Project Oxford (now known as Cognitive Services) and much more!

So, it is time to say THANKS to Greg Duncan (@gduncan411), who has done a great job for more than 10 years. Thanks!

Greetings @ Burlington

El Bruno

References

#Coding4Fun – Cancel a #Skype call with a slap using #LeapMotion

Hi!

I think I have been very lucky during my working years, since my bosses have always been good people. So this is not personal, or anything like it; it is just a funny way to cancel an “incoming call” in Skype.

To implement this idea I used the Leap Motion SDK and the Skype for Business Desktop SDK (Lync 2013 SDK), creating a very simple app that allows us to ignore a call with a Swipe gesture, also known as a slap.


Important: If you are thinking of using the Lync 2013 SDK on Windows 10 with Visual Studio 2015 or Visual Studio 15 Preview, you will run into a lot of installation problems. In the coming days I will write a post about it, with a step by step guide to get a working development environment.

The source code is available on GitHub

Greetings @ Toronto

-El Bruno

References

#Coding4Fun – Slap your boss away with #Skype and #LeapMotion

Hi !

During all my time working I have been lucky enough to have some very cool bosses. So this is not personal at all; it is just a funny way to discard an “incoming call” from someone.

The main idea is to use the Leap Motion SDK and the Skype for Business Desktop SDK (Lync 2013 SDK) to create a simple app which allows us to ignore a call just by doing a Swipe gesture.


Important: If you try to use the Lync 2013 SDK on Windows 10 with Visual Studio 2015 or Visual Studio 15 Preview, you’ll find tons of problems. I’ll write a later post about the necessary steps to get this working.

The source code is available on GitHub

Greetings @ Toronto

-El Bruno

References

#Coding4Fun – Cool robots to play with kids and learn some #programming

Hi!

During the past few weeks some friends asked me if I knew about software and robotics kits for kids to play with and learn from. I had left this post sitting as a draft, but today I’m finishing it with 4 products that, personally, have given us a lot of fun.

Scratch (Free)


In many cases, the DIY robotics products on offer are based on Arduino. If you know a bit about the “maker movement” you surely know what this means: programming in C, knowing about and connecting wires and components, etc. These kinds of activities are a NO WAY WITH KIDS. And that’s why Scratch shows up as an interesting option.

Scratch allows kids (and adults) to create their own games, interactive stories and animations using a visual programming environment. It was created by the Lifelong Kindergarten (LLK) group at the MIT Media Lab. You can try an online version here.

 

Let’s start looking at some robots, the first one from Europe.

BQ Zowi (~ €110)

Zowi has one purpose: to teach kids that technology can be transparent, straightforward and fun. In addition to being a toy, it is also an educational ecosystem with an extensive pedagogical pathway. What’s more, it is an open source robot: the physical design, the code and the app are all freely available so that anyone can understand and modify it. Zowi uses a custom version of Scratch for the programming tasks.

Programming: Scratch

Soldering: No

Controller: Smartphone or Tablet

Makeblock mBot Educational Robot Kit for Kids (~ €200)


This is ideal for building a custom robot, like a robot tank or a three-wheeled car. The kit has everything you need to build these kinds of vehicles and uses an Arduino that is fully integrated into the Scratch programming environment. And now it’s time to talk about mBlock: a programming editor customized specifically for Makeblock’s Arduino products. The best version is the Bluetooth one, which can be controlled with a smartphone or tablet.

Programming: Scratch

Soldering: No

Controller: Smartphone or Tablet

 
OK, that’s it for Arduino-based robots; now let’s look at other projects that are more friendly.

LittleBits (~ 99$ to ~200$)


LittleBits is a “plug and play” kit, where each piece is a small, very smart electronic component. The pieces connect magnetically, so we can have something with lights, or a motor, etc. up and running very quickly.

The best way to get an idea of what is possible is to look at the projects section. It is great for small kids and does not require any programming skills.

Kano (~ 150$)

OK, this is probably for older kids, but for me it is still great:

We are going to build a computer, but we’ll do it as if it were Lego.

And the best way to share this is with a video

 

And that’s it. I’ll probably do an update to add some other tools like the Lego EV3, or other bots.

Important: All prices are estimates and should be used only as a reference.

Greetings @ Toronto

-El Bruno

References

#Coding4Fun – Cool projects / robots to play with #kids and learn some #programming skills

Hi !

During the past weeks some friends asked me about cool projects to play with robots with their kids. So I’ll share 4 nice products that will give you a lot of fun with your kids. Remember, kids today don’t want to only consume toys; they want to create things and have their inventions interact with the world!

Scratch (Free)


The basis for most of these projects is an Arduino board, and if you are into the maker “way of life”, you know what this means: C code, low-level tasks, etc. Most of this is a big NO WAY FOR A KID. So here Scratch comes up as an interesting option.

Scratch allows kids (and everyone) to create their own games, interactive stories, and animations using a visual programming environment. Scratch is made by the Lifelong Kindergarten (LLK) group at the MIT Media Lab. You can try an online version here.

 

Let’s move to the robots and start with a cool one from Europe.

BQ Zowi (~ €110)


Zowi has one purpose: to teach kids that technology can be transparent, proximate and fun. In addition to being a toy, it is also an educational ecosystem with an extensive pedagogical pathway. What’s more, it is an open source robot: the physical design, the code and the app have all been made freely available so that anyone can understand it and modify it. Zowi uses a custom version of Scratch for the programming tasks.

Programming: Scratch

Soldering: No

Controller: Smartphone or Tablet

Makeblock mBot Educational Robot Kit for Kids (~ €200)


This is great for building a custom robot, like a robot tank or a three-wheeled car. The kit has everything you need to build your vehicle and uses an Arduino-based controller that is fully integrated into the Scratch programming environment. Now it is time to introduce mBlock, an Arduino programming editor customized specifically for Makeblock products. The best version is the Bluetooth one, which can be controlled with a smartphone or tablet.

Programming: Scratch

Soldering: No

Controller: Smartphone or Tablet

 

OK, that’s it for Arduino-based robots; let’s look at some cool ones that are not based on a custom Arduino board.

LittleBits (~ 99$ to ~200$)


LittleBits is a literal “plug and play” kit of circuits. Pieces connect magnetically, so you have something with lights, or a motor, etc. running in a flash. The best way to get an idea is to browse the projects section, where you’ll see some great projects. It is really cool for small kids, and it doesn’t require any programming skills at all.

Kano (~ 150$)

OK, this is probably for bigger kids, but the pitch is great:

Let’s build a computer having fun like when we play with Lego.

And the best way to share this is with a Video

 

And that’s it. I’ll probably make an update adding some other tools we have like the Lego EV3, or other bots.

Important: All prices are estimates and must be used only as references. Some of them change if you try to buy them from 3rd-party retailers like Amazon.

Greetings @ Toronto

-El Bruno

References

#Coding4Fun – What’s There App, source code and #VisionAPI usage example

Hi!

After yesterday’s post, some people asked me a couple of questions about the What’s There App, so I’ll take advantage of this post to comment on some of its details.

Before creating the app, I explained to my kid, with a simple diagram, how the App works:

  1. The smartphone takes a photo
  2. The photo is processed with Vision API
  3. Vision API returns information about what was found in the photo
  4. The smartphone talks back to us with the information about the photo


Then I made a very simple drawing and Valentino got the idea (I hope you do too). So let’s look at each of the App’s steps in a bit more detail.

Let’s start with the prerequisites: in the package manifest we need to enable the following capabilities: Internet, Microphone and WebCam.


To take a photo, I reused a function that takes a photo, saves it to local storage and returns the created file. It also supports taking photos in “silent mode”, where the user doesn’t notice that photos are being taken, and in “normal mode”, where the OS window for taking photos appears.

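The original function is a C# UWP snippet that was published as a screenshot. As a hypothetical stand-in for step 1 (take a photo, save it to local storage, return the created file), here is a small OpenCV sketch in Python; the function and file names are just placeholders for illustration.

import cv2

def take_photo(file_name="photo.jpg", camera_index=0):
    # open the camera, grab a single frame and save it to local storage
    capture = cv2.VideoCapture(camera_index)
    ret, frame = capture.read()
    capture.release()
    if not ret:
        return None
    cv2.imwrite(file_name, frame)
    return file_name  # path of the created file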

The next step is to analyze the photo using Vision API. For this we need to add the Microsoft.ProjectOxford.Vision NuGet package to our project.


Important: To use the API we need an active Cognitive Services account and to copy the corresponding keys; I explained this in this post.

Now we can process the photo with Vision API. The following lines are a simple example of how to do this. It is important to note that inside Description we find the Captions collection. Each Caption is a complete sentence describing the photo.

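The original snippet is C# using the Microsoft.ProjectOxford.Vision client and was also published as a screenshot; as a rough stand-in, here is a Python sketch calling the same Computer Vision analyze endpoint over REST. The endpoint region and the key are placeholder values you would replace with your own Cognitive Services settings, so treat it as an illustration rather than the app’s actual code.

import requests

# Placeholders: use your own Cognitive Services region and key
VISION_ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze"
VISION_KEY = "YOUR_VISION_API_KEY"

def describe_photo(image_path):
    headers = {
        "Ocp-Apim-Subscription-Key": VISION_KEY,
        "Content-Type": "application/octet-stream",
    }
    params = {"visualFeatures": "Description,Tags"}
    with open(image_path, "rb") as image_file:
        response = requests.post(VISION_ENDPOINT, headers=headers,
                                 params=params, data=image_file.read())
    response.raise_for_status()
    analysis = response.json()

    # Description.Captions: full sentences describing the photo
    captions = [c["text"] for c in analysis["description"]["captions"]]
    # Tags: single words for the most relevant elements found
    tags = [t["name"] for t in analysis.get("tags", [])]
    return captions, tags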

If we want to give this description a personal touch, the Tags collection gives us the most relevant elements found, as single words.

Finally, for the Text to Speech process, I used the SpeechSynthesizer class. This class lets us get the audio stream from the text and then play it back using a MediaElement.


And that’s it! 4 simple steps to create a really fun App.

Greetings @ Toronto

-El Bruno

References