Please check each event location for specific details on times and streaming options.
Let’s code a drone to follow faces, syncing everything with Azure IoT
You can control a drone using 20 lines of code. That’s the easy part. However, adding extra features like face or object detection, and programming the drone to follow an object or a face, requires … another 20 lines of code! During this workshop we will review how to connect to a drone, how to send and receive commands, how to read the camera video feed, and how to apply AI on top of that feed to recognize objects or faces. We will use a simple home drone ($100) and Python. And, when we review some enterprise scenarios, we will use Azure IoT to sync the drone information. Let’s build this!
For this session I’ll explain how to create an SDK from scratch to control a DJI Tello drone, and the last part will include some examples of how to connect the drone’s reported information to Azure IoT.
And this is probably the last one from my Microsoft MVP room-office (see below); there are some changes coming in the near future, so let’s rock these 2 sessions!
Want to get up to speed?
At Lemoncode we offer online training delivered by professionals who are in the trenches of consulting every day:
Today I’ll step back a couple of posts and add 2 simple lines that allow me to save a video file from the drone camera. This was a request, and it makes a lot of sense to have a recorded file from the drone camera.
The video will later contain detected objects and more, so let’s go to the code. All the magic happens here:
Lines 97-103. Open the drone camera stream, and also open a video output stream to save the video file.
Lines 123-124. Display the camera feed and add the current camera frame to the output video file.
Lines 136-139. Dispose of the objects and close the video output file.
In my previous post, I shared an example where I analyzed the camera feed using an image recognition model created with Custom Vision. Today I’ll expand the sample and show, in real time, the detected MVP logos with a frame drawn on the drone camera feed.
Let’s take a look at the demo working in the following image.
At the top of the image, we can see the app console log, with the information received for each analyzed frame. When a logo is detected, we can see the tag, the probability, and the bounding box coordinates.
In order to position the frames in the correct location, I need to do some math using the current camera and image size and the returned bounding box values for height, left, top, and width. Lines 87-110.
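Custom Vision object detection returns the bounding box values (left, top, width, height) normalized to the 0–1 range, so the math is essentially scaling those values to the current frame size. A small sketch of that conversion (the helper name is mine, not from the post):

```python
# Custom Vision returns normalized (0..1) bounding box values; scale
# them to pixel coordinates for the current camera frame.
def scale_box(frame_w: int, frame_h: int,
              left: float, top: float,
              width: float, height: float) -> tuple:
    """Convert a normalized bounding box to (x1, y1, x2, y2) pixels."""
    x1 = int(left * frame_w)
    y1 = int(top * frame_h)
    x2 = int((left + width) * frame_w)
    y2 = int((top + height) * frame_h)
    return x1, y1, x2, y2
```

With the corners in pixels, drawing the frame is a single call such as `cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)`.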