#Event – Resources used during the #GlobalAI Tour, Buenos Aires, May 30. Let’s code a drone to follow faces! Using AI, Python, containers and more.


Hi !

The event is complete, and now it’s time to share the resources I used during the session.

Slides

Code

https://github.com/elbruno/events/tree/master/2020%2005%2030%20Global%20AI%20Tour%20BsAs%20Drone%20AI

Resources

Happy coding!

Greetings

El Bruno

#Event – #GlobalAI On Tour, Argentina. Let’s program a drone, in Spanish!


Hi!

Last-minute changes! Tomorrow I’ll be one of the virtual speakers at the Global AI On Tour for Argentina. I’ll be talking about drones, and more than talking, actually programming a drone a bit to have some fun. And using a bit of AI to make this more entertaining.

To raise the bar, you can see in the agenda that the rest of the speakers will cover much more serious and interesting topics.

Global AI On Tour Argentina

Happy coding!

Greetings

El Bruno

#Coding4Fun – How to control your #drone with 20 lines of code! (17/N)


Hi !

Once we have a trained Custom Vision model instance, we can use it to recognize objects in the drone camera feed. Read my previous posts for a description of these steps.

Another interesting scenario is to save local files for every detected object. In the following code, I’ll save 2 different files for each detected object:

  • A camera frame image, with a frame around the detected object
  • A plain text file with the JSON information

In the sample code below, the save process happens in lines 122-129. And, in a simple (not fancy) way, both files share the same base name so they can be correlated.

drone recognized files
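
A minimal sketch of that save step could look like the helper below, assuming frame is the current OpenCV image (with the detection frame already drawn) and prediction is the parsed JSON for the detected object; the helper name and the timestamp-based naming are illustrative:

import datetime
import json

import cv2

def save_detected_object(frame, prediction):
    # use the same timestamp-based base name for both files, so the
    # image and its JSON information can be correlated later
    base_name = datetime.datetime.now().strftime('%Y%m%d_%H%M%S_%f')
    cv2.imwrite(base_name + '.jpg', frame)
    with open(base_name + '.json', 'w') as json_file:
        json.dump(prediction, json_file, indent=2)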

So let’s go to the full code:

And if you want to see this up and running, it’s much better to watch it in the video (start at ):

The complete source code can be found here: https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Happy coding!

Greetings

El Bruno

References

#Coding4Fun – How to control your #drone with 20 lines of code! (16/N)


Hi !

In my previous post, I shared an example where I analyzed the camera feed using an image recognition model created with Custom Vision. Today I’ll expand the sample and show, in real time, the detected MVP logos with a frame drawn in the drone camera feed.

Let’s take a look at the demo working in the following image.

drone camera image analysis using custom vision and drawing frames for detected objects

At the top of the image, we can see the app console log, with the information received for each analyzed frame. When an object is detected, we can see the tag, the probability, and the bounding box coordinates.

A sample JSON response starts like this:

{
  "created": "2020-04-08T17:22:02.179359",
  "id": "",
  "iteration": "",
  "predictions": [
    {
      "boundingBox": {
        "height": 0.1979116,
        "left": 0.3235259,
        "top": 0.05847502,
        "width": 0.20438321
      },
      "probability": 0.89171505,
      "tagId": 0,
      "tagName": "MVP"
    },
    {
      "boundingBox": {
        "height": 0.2091526,
        "left": 0.65271178,
        "top": 0.0433814,
        "width": 0.17669522
      },
      "probability": 0.70330358,
      "tagId": 0,
      "tagName": "MVP"
    },

In order to position the frames in the correct location, I need to do some math using the current camera image size and the returned bounding box values for height, left, top, and width (lines 87-110).

# Custom Vision returns bounding box values relative to the image size (0-1),
# so scale them to percentages first
resize_factor = 100

height = int(bb['height'] * resize_factor)
left = int(bb['left'] * resize_factor)
top = int(bb['top'] * resize_factor)
width = int(bb['width'] * resize_factor)

# adjust the percentages to the camera frame size
height = int(height * camera_Height / 100)
left = int(left * camera_Width / 100)
top = int(top * camera_Height / 100)
width = int(width * camera_Width / 100)

# draw the bounding box; cv2.rectangle expects points as (x, y)
start_point = (left, top)
end_point = (left + width, top + height)
color = (255, 0, 0)  # blue in BGR
thickness = 2
cv2.rectangle(img, start_point, end_point, color, thickness)
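
Putting the pieces together, a minimal sketch that draws a frame for every prediction above a probability threshold could look like this, where img is the current OpenCV frame and response_json is the parsed response shown above (the function name and threshold are illustrative):

import cv2

def draw_predictions(img, response_json, threshold=0.75):
    # img.shape is (rows, cols, channels)
    camera_Height, camera_Width = img.shape[:2]
    for prediction in response_json['predictions']:
        if prediction['probability'] < threshold:
            continue
        bb = prediction['boundingBox']
        # bounding box values are relative (0-1); scale them to pixels
        left = int(bb['left'] * camera_Width)
        top = int(bb['top'] * camera_Height)
        width = int(bb['width'] * camera_Width)
        height = int(bb['height'] * camera_Height)
        cv2.rectangle(img, (left, top),
                      (left + width, top + height), (255, 0, 0), 2)
    return img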

So let’s go to the full code:

And if you want to see this up and running, it’s much better to watch it in the video (start at ):

The complete source code can be found here: https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Happy coding!

Greetings

El Bruno

References

#Coding4Fun – How to control your #drone with 20 lines of code! (15/N)


Hi !

Let’s use Custom Vision to analyze the images from our drone camera. In this scenario, I created a custom model to recognize MVP awards from my MVP wall. I know, that’s bragging, but I like it.

Disclaimer: there is plenty of documentation and there are many tutorials about Custom Vision, so I won’t go deep into the steps for creating a model. See the references.

For this scenario, I assume that:

  • You have created a model in Custom Vision
  • You have published the Custom Vision model and have an HTTP endpoint
  • Or the model is exported as a Docker image and is running in a Docker container, which also gives us an HTTP endpoint

The code is similar to the one we used before: OpenCV to hook up the camera, and commands to take off and land. Let me highlight a couple of important sections in this code (see the sketch after this list):

  • There are a couple of new references, mostly used to process the JSON response from the Custom Vision model.
  • Lines 146-155. Get the frame from the drone camera and save it to a local file, applying a specific file name format for demo purposes.
  • Lines 157-163. Make an HTTP POST call to analyze the saved file, convert the result to a JSON object (room for improvement here), and analyze the JSON response.
  • Lines 70-85. Analyze the JSON response from the Custom Vision model: sort the results by probability, filter them using a threshold (75), and return a string with the detected objects.
  • Lines 165-178. Calculate and display the FPS and the detected objects.
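
A minimal sketch of the save-and-analyze steps, assuming the Custom Vision model is exported and running in a local Docker container that accepts an image via HTTP POST (the URL, port, and names below are illustrative):

import datetime

import cv2
import requests

# hypothetical endpoint for a Custom Vision model running in a local
# Docker container; adjust the host and port to your own deployment
PREDICTION_URL = 'http://127.0.0.1:8080/image'

def analyze_frame(frame, threshold=0.75):
    # save the frame with a timestamp-based file name, for demo purposes
    file_name = datetime.datetime.now().strftime('%Y%m%d_%H%M%S') + '.jpg'
    cv2.imwrite(file_name, frame)

    # POST the saved file to the Custom Vision endpoint and parse the JSON
    with open(file_name, 'rb') as image_file:
        response = requests.post(PREDICTION_URL, data=image_file)
    results = response.json()

    # sort by probability, filter with the threshold, return the tag names
    predictions = sorted(results['predictions'],
                         key=lambda p: p['probability'], reverse=True)
    detected = [p['tagName'] for p in predictions
                if p['probability'] >= threshold]
    return ', '.join(detected)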

A sample JSON response starts like this:

{
  "created": "2020-04-08T17:22:02.179359",
  "id": "",
  "iteration": "",
  "predictions": [
    {
      "boundingBox": {
        "height": 0.1979116,
        "left": 0.3235259,
        "top": 0.05847502,
        "width": 0.20438321
      },
      "probability": 0.89171505,
      "tagId": 0,
      "tagName": "MVP"
    },
    {
      "boundingBox": {
        "height": 0.2091526,
        "left": 0.65271178,
        "top": 0.0433814,
        "width": 0.17669522
      },
      "probability": 0.70330358,
      "tagId": 0,
      "tagName": "MVP"
    },

So let’s go to the full code:

And if you want to see this up and running, it’s much better to watch it in the video (start at ):

The complete source code can be found here: https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Happy coding!

Greetings

El Bruno

References

#Event – Recorded session (Spanish) and resources for the #GlobalAICommunity Virtual Tour, Monterrey. Let’s program a drone to follow faces!


Hi !

The event is complete, and now it’s time to share the recorded session and the resources I used during the session.

Recorded Session (Spanish)

Slides

Code

https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Resources

Happy coding!

Greetings

El Bruno

#Event – #GlobalAICommunity More Drones and AI, but this time in Spanish! April 18th for Global AI On Tour MTY


Hello!

I was about to put the drone away, delete the Azure resources, and paint my office, when I had to put all of this on hold for an excellent reason.

On April 18th I’ll be talking about drones, Artificial Intelligence, Docker, and other surprises at the free Global AI On Tour Monterrey event!

Global AI On Tour Monterrey!

And it’s not just me: the event runs for 8 days, with daily sessions on AI covering topics such as Azure Cognitive Services, Azure Blockchain, Neural Networks, IoT, Alexa Skills, and much more!

Take a look at the agenda and don’t hesitate to register.

See you there, virtually!

Happy coding!

Greetings

El Bruno

#Coding4Fun – How to control your #drone with 20 lines of code! (14/N) – #GlobalAICommunity Virtual Tour, recorded session


Hi !

Before moving forward with the Coding4Fun series, I think this video is a great checkpoint of the posts that I’ve written so far.

Happy coding!

Greetings

El Bruno

#Event – Resources used during the #GlobalAICommunity Virtual Tour, April 8th. Let’s code a drone to follow faces! Using AI, Python, containers and more.


Hi !

The event is complete, and now it’s time to share the resources I used during the session.

Slides

Code

https://github.com/elbruno/events/tree/master/2020%2004%2008%20Global%20AI%20Community%20Drones%20AI

Resources

Happy coding!

Greetings

El Bruno

#Event – #GlobalAICommunity Virtual Tour, April 8th. Let’s code a drone to follow faces! Using AI, Python, containers and more.


Hi !

On the 8th of April 2020, the Global AI Community is hosting a 30-hour live event across time zones and in different languages. Mark the date in your calendar, subscribe to our YouTube channel, and tune in.

We updated the agenda with an amazing set of great speakers and super cool sessions. Take a look at the agenda and subscribe to the event here:

https://live.globalai.community/

And I realized that I forgot to share the details about my session, so here they are:

Let’s code a drone to follow faces! Using AI, Python, containers and more

You can control a drone using 20 lines of code. That’s the easy part. However, adding extra features like face or object detection, and programming the drone to follow an object or a face, requires … another 20 lines of code!
During this workshop we will review how to connect to a drone, how to send and receive commands, how to read the camera video feed, and how to apply AI on top of the camera feed to recognize objects or faces. We will use a simple home drone ($100) and Python. And, when we review some enterprise scenarios, we will use Azure Custom Vision in containers for some specific object recognition stories.
Let’s build this!
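
As a teaser, here is a minimal sketch of the “20 lines of code” idea, assuming a Tello-style drone that accepts plain-text commands over UDP (the address and ports are the ones from the Tello SDK documentation):

import socket
import time

tello_address = ('192.168.10.1', 8889)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 9000))  # local port to receive the drone responses

def send(command):
    # send a plain-text command and print the drone's response
    sock.sendto(command.encode('utf-8'), tello_address)
    response, _ = sock.recvfrom(1024)
    print(command, '->', response.decode('utf-8'))

send('command')   # enter SDK mode
send('takeoff')
time.sleep(5)
send('land')
sock.close()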

Happy coding!

Greetings

El Bruno