#Coding4Fun – How to control your #drone with 20 lines of code! (17/N)


Hi !

Once we have a trained Custom Vision model instance, we can use it to recognize objects from the drone camera feed. Read my previous posts for descriptions of these steps.

Another interesting scenario is saving local files for every detected object. In the following code, I’ll save 2 different files for each detected object:

  • A camera frame image, with a frame around the detected object
  • A plain text file with the JSON information

In the sample code below, the save process is in lines 122-129. And, to keep it simple, both files share the same name so they can be correlated.
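
As a minimal sketch of that save step (assuming an OpenCV frame img already annotated with the detection frame, and the prediction response available as a JSON string; the folder and name format here are illustrative, not the exact ones from the full code):

import os
import time
import cv2

def save_detection(img, json_results, folder="detections"):
    os.makedirs(folder, exist_ok=True)
    # one timestamped base name shared by both files, so they correlate
    base_name = time.strftime("%Y%m%d_%H%M%S")
    # 1) camera frame image, with the frame drawn around the detected object
    cv2.imwrite(os.path.join(folder, base_name + ".jpg"), img)
    # 2) plain text file with the JSON information
    with open(os.path.join(folder, base_name + ".json"), "w") as json_file:
        json_file.write(json_results)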

drone recognized files

So let’s go to the full code:

And if you want to see this up and running, it’s much better to watch it in a video (start at ):

The complete source code can be found here: https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Happy coding!

Greetings

El Bruno


#Coding4Fun – How to control your #drone with 20 lines of code! (16/N)


Hi !

In my previous post, I shared an example where I analyzed the camera feed using an Image Recognition model created with Custom Vision. Today I’ll expand the sample and show, in real time, the detected MVP logos with a frame in the drone camera feed.

Let’s take a look at the demo working in the following image.

drone camera image analysis using custom vision and drawing frames for detected objects

At the top of the image, we can see the app console log, with the information received for each analyzed frame. When an object is detected, we can see the tag, the probability, and the bounding box coordinates.

A sample JSON response starts like this:

{
  "created": "2020-04-08T17:22:02.179359",
  "id": "",
  "iteration": "",
  "predictions": [
    {
      "boundingBox": {
        "height": 0.1979116,
        "left": 0.3235259,
        "top": 0.05847502,
        "width": 0.20438321
      },
      "probability": 0.89171505,
      "tagId": 0,
      "tagName": "MVP"
    },
    {
      "boundingBox": {
        "height": 0.2091526,
        "left": 0.65271178,
        "top": 0.0433814,
        "width": 0.17669522
      },
      "probability": 0.70330358,
      "tagId": 0,
      "tagName": "MVP"
    },

In order to position the frames in the correct location, I need to do some math using the current camera and image size and the returned bounding box values for height, left, top and width (lines 87-110).

resize_factor = 100

height = int(bb['height'] * resize_factor)
left = int(bb['left'] * resize_factor)
top = int(bb['top'] * resize_factor)
width = int(bb['width'] * resize_factor)

# adjust to the camera frame size (camera_Width x camera_Heigth)
height = int(height * camera_Heigth / 100)
left = int(left * camera_Width / 100)
top = int(top * camera_Heigth / 100)
width = int(width * camera_Width / 100)

# draw the bounding box; OpenCV points are (x, y), i.e. (left, top)
start_point = (left, top)
end_point = (left + width, top + height)
color = (255, 0, 0)
thickness = 2
cv2.rectangle(img, start_point, end_point, color, thickness)
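
To apply this to every detected object in the response, a hedged sketch could look like this (assuming response_json is the parsed prediction response shown above and img is the current camera frame; the frame size values are illustrative):

camera_Width, camera_Heigth = 640, 480  # illustrative camera frame size

for prediction in response_json['predictions']:
    bb = prediction['boundingBox']
    # bounding box values are relative (0..1); scale them to pixels
    left = int(bb['left'] * camera_Width)
    top = int(bb['top'] * camera_Heigth)
    width = int(bb['width'] * camera_Width)
    height = int(bb['height'] * camera_Heigth)
    cv2.rectangle(img, (left, top), (left + width, top + height), (255, 0, 0), 2)
    # show the tag name and the probability next to the box
    label = f"{prediction['tagName']} {prediction['probability']:.2f}"
    cv2.putText(img, label, (left, top - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 1)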

So let’s go to the full code:

And if you want to see this up and running, it’s much better to watch it in a video (start at ):

The complete source code can be found here: https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Happy coding!

Greetings

El Bruno


#Coding4Fun – How to control your #drone with 20 lines of code! (15/N)


Hi !

Let’s use Custom Vision to analyze the images from our drone camera. In this scenario, I created a custom model to recognize MVP awards from my MVP wall. I know, that’s bragging, but I like it.

Disclaimer: there is plenty of documentation and there are many tutorials about Custom Vision, so I won’t go deep into the steps to create a model. See references.

For my next scenario, I’ll assume that:

  • You have created a model in Custom Vision
  • You have published the Custom Vision model, and have an HTTP endpoint
  • Or you have exported the model as a Docker image and it’s running in a Docker container, also exposing an HTTP endpoint

The code is similar to the one we used before: OpenCV to hook up the camera, and commands to take off and land. Let me remark on a couple of important sections in this code (a minimal sketch of these steps follows the list):

  • There are a couple of new references, mostly used to process the JSON response from the Custom Vision model.
  • Lines 146-155. Get the frame from the drone camera and save it to a local file. The file name uses a specific format, for demo purposes.
  • Lines 157-163. Make an HTTP POST call to analyze the saved file, convert the result to a JSON object (room for improvement here), and analyze the JSON response.
  • Lines 70-85. Analyze the JSON response from the Custom Vision model: sort the results by probability, filter them using a threshold (75%), and return a string with the detected objects.
  • Lines 165-178. Calculate and display the FPS and the detected objects.
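
Here is that sketch; the endpoint URL, the file name format, and the 0.75 threshold are illustrative assumptions, and the exact values live in the full code below:

import time
import cv2
import requests

ENDPOINT = "http://127.0.0.1/image"  # illustrative: local Custom Vision prediction endpoint
THRESHOLD = 0.75                     # illustrative probability threshold (75%)

def detect_objects(img):
    # save the camera frame to a local file, with a timestamped name for demo purposes
    file_name = time.strftime("drone_%Y%m%d_%H%M%S.jpg")
    cv2.imwrite(file_name, img)

    # HTTP POST the saved file to the Custom Vision endpoint
    with open(file_name, "rb") as image_file:
        response = requests.post(
            ENDPOINT,
            headers={"Content-Type": "application/octet-stream"},
            data=image_file)
    results = response.json()

    # sort by probability, filter with the threshold,
    # and return a string with the detected objects
    predictions = sorted(results["predictions"],
                         key=lambda p: p["probability"], reverse=True)
    detected = [p["tagName"] for p in predictions if p["probability"] >= THRESHOLD]
    return ", ".join(detected)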

A sample JSON response starts like this:

{
  "created": "2020-04-08T17:22:02.179359",
  "id": "",
  "iteration": "",
  "predictions": [
    {
      "boundingBox": {
        "height": 0.1979116,
        "left": 0.3235259,
        "top": 0.05847502,
        "width": 0.20438321
      },
      "probability": 0.89171505,
      "tagId": 0,
      "tagName": "MVP"
    },
    {
      "boundingBox": {
        "height": 0.2091526,
        "left": 0.65271178,
        "top": 0.0433814,
        "width": 0.17669522
      },
      "probability": 0.70330358,
      "tagId": 0,
      "tagName": "MVP"
    },

So let’s go to the full code:

And if you want to see this up and running, it’s much better to watch it in a video (start at ):

The complete source code can be found here: https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Happy coding!

Greetings

El Bruno


#Event – Let’s rock some #AI and #ComputerVision at @devdotnext #devdotnext2020


Hi!

In a couple of weeks, I’ll be visiting one of the biggest events in Broomfield, Colorado: @devdotnext.

DevDotNext

DevDotNext hosts 150+ 75-minute Presentations, 4 Keynotes/Panels, and 11 All-day Pre-Conference Workshops.

The schedule is available at https://www.devdotnext.com/schedule, with some of these amazing topics:

  • Languages
  • Design and Architecture
  • Cloud
  • Server-Side
  • Frontend
  • DevOps
  • Microservices
  • Machine Learning
  • Testing
  • Being agile
  • Leadership
  • And more

I’ll be sharing some experiences and insights around Machine Learning, Computer Vision and IoT.

Registration and event details

Hurry up, regular registration ends soon.
Register at https://www.devdotnext.com/register

Hope to see you there. Use coupon code LEARNWITHME.

Happy coding!

Greetings

El Bruno

#Event – I will be speaking at @devdotnext #devdotnext2020 this March in Colorado.


Hi!

In a couple of weeks, I’ll be visiting one of the biggest events in Broomfield, Colorado: @devdotnext.

DevDotNext

DevDotNext hosts 150+ 75-minute Presentations, 4 Keynotes/Panels, and 11 All-day Pre-Conference Workshops.

Topics covered include:

  • Languages
  • Design and Architecture
  • Cloud
  • Server-Side
  • Frontend
  • DevOps
  • Microservices
  • Machine Learning
  • Testing
  • Being agile
  • Leadership
  • And more

I’ll be sharing some experiences and insights around Machine Learning, Computer Vision and IoT. Here are my session details.

How a PoC at home can scale to Enterprise Level using Custom Vision APIs (v2!)

It all started with a DIY project to use Computer Vision for security cameras at home. A custom Machine Learning model is the core component used to analyze pictures to detect people, animals and more in a house environment. The AI processing is performed at the edge, in dedicated hardware and the collected information is stored in the cloud.

The same idea can be applied to several CCTV scenarios, like parking lots, train stations, malls and more. However, moving this into enterprise scale brings a set of challenges, which are going to be described and explained in this session.

In this new version of the session, we will start from scratch and create a complete “Parking Garage Open Space Tracker” solution with live devices and live cars (small ones, of course).

Registration and event details

Hurry up, regular registration ends soon.
Register at https://www.devdotnext.com/register

Happy coding!

Greetings

El Bruno

#CustomVision – Label Images with suggested tags. Cool update for my #MSIgnite sessions


Hi!

I’m updating some of my demos for Microsoft Ignite and I found an amazing new feature in Custom Vision: Suggested Tags (see references). This feature is super useful in automatic detection scenarios, like the parking lot demo. I’ll use the official documentation to describe this feature:

When you tag images for a Custom Vision model, the service uses the latest trained iteration of the model to predict the labels of untagged images. It then shows these predictions as suggested tags, based on the selected confidence threshold and prediction uncertainty. You can then either confirm or change the suggestions, speeding up the process of manually tagging the images for training.

Label images faster with suggested tags

And, as usual, let’s use 2 images to describe this. Once I add a new image to my Custom Vision project, I can start to select objects and tag them. However, if I have already trained my project, I will also see the [Suggested object on] option.

Custom Vision select objects and apply tags

With the default threshold value of 66%, the auto-label feature does not detect any area. However, if I lower the threshold, for example to 28%, it will automatically detect one of the parking slots: Slot 3. Once I’m happy with the suggested objects, I can confirm them, and that’s it! Super easy.

Custom Vision enable suggested objects and low threshold

This feature is amazing, and I’m looking forward to using it in real projects to see how much time it saves in image labeling scenarios.

Bonus: Below you can see the before and after of the demo project. My daughter also decorated the new parking lot box, with some IronMan content. I’ll need to figure out how to connect this with my session speech!

Happy coding!

Greetings @ Toronto

El Bruno


#CustomVision – Sort and manage Json response estimation in a single line of code


Hi!

In my Custom Vision samples, I usually send an image to a CustomVision.ai HTTP endpoint and process the JSON result. The results are very easy to understand; however, I created a C# converter class to help with the Custom Vision results.

To create this class, I navigated to http://json2csharp.com/, pasted a sample result, and made some changes to the generated code. The output is this useful class:

It’s a very simple class, and the best way to describe it is to show a usage scenario.

The main points to note are (a rough Python equivalent of the one-liner follows the list):

  • Lines 1 to 9: open a local file, create an HTTP client, and make the HTTP POST request
  • Line 12: convert the JSON response (string) to a C# object and then get the best prediction
  • The best prediction comes from a single LINQ expression that sorts the predictions by probability and selects the first one
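
The C# class and the usage gist aren’t embedded here, but as a rough Python equivalent (assuming json_response holds the response string from the endpoint), the same “best prediction in a single line” idea looks like this:

import json

# parse the JSON response (string) into Python objects
results = json.loads(json_response)

# single line: pick the prediction with the highest probability
best_prediction = max(results["predictions"], key=lambda p: p["probability"])
print(best_prediction["tagName"], best_prediction["probability"])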

Easy and amazing!

Happy Coding!

Greetings @ Burlington

El Bruno


#Event – Materials and Resources used during my #CustomVision and #AI session at #CDC2019

Hi!

I’m drafting these notes on the airplane, along with a bigger post about the full and amazing experience at the Caribbean Developer Conference. So I’ll start with the usual slides and materials, and use this post later as a reference for the full experience.

Slides

Code

https://github.com/elbruno/events/tree/master/2019%2010%2004%20CDC

Links

Tweets

Greetings @ Toronto

El Bruno


#Event – I’ll be at the Caribbean Developer Conference on October ! #CDC2019

Hi !

Wow, I’m completely amazed because I’ll have the chance to share some Machine Learning, Custom Vision and other experiences at the Caribbean Developer Conference in October.

Caribbean Developer Conference

This event is great and as usual, the list of speakers is AMAZING!

I’ll share more details later; in the meantime, if you want to know more, the 2018 video recap is a great way to get a feel for the event.

Happy Coding!

Greetings @ Toronto

El Bruno

#Event – Resources for the sessions about #DeepLearning and #CustomVision at the @ChicagoCodeCamp

Hi!

Another post-event post, this time with a big thanks to the team behind one of the most amazing events I’ve been to this year: Chicago CodeCamp.

I had the chance to meet a lot of amazing people, to learn a lot during the sessions and also to visit the great city of Chicago.

As usual, now it’s time to share slides, code and more.

Deep Learning for Everyone? Challenge Accepted!

Let’s start with the Deep Learning resources:

Demos Source Code: https://github.com/elbruno/events/tree/master/2019%2005%2011%20Chicago%20CodeCamp%20Deep%20Learning

Session: How a PoC at home can scale to Enterprise Level using Custom Vision APIs

And here are the [How a PoC at home can scale to Enterprise Level using Custom Vision APIs] resources:

Demos Source Code: https://github.com/elbruno/events/tree/master/2019%2005%2011%20Chicago%20CodeCamp%20CustomVision

And finally, some ML.NET, Deep Learning and Custom Vision resources:

My posts on Custom Vision and ONNX

  1. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  2. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  3. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, drawing frames
  4. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, calculate FPS
  5. Can’t install Docker on Windows 10 Home, need Pro or Enterprise
  6. Running a Custom Vision project in a local Docker Container
  7. Analyzing images in a Console App using a Custom Vision project in a Docker Container
  8. Analyzing images using PostMan from a Custom Vision project hosted in a Docker Container
  9. Building the CustomVision.ai project in Docker in a RaspberryPi
  10. Container dies immediately upon successful start in a RaspberryPi. Of course, it’s all about TensorFlow dependencies
  11. About ports, IPs and more to access a container hosted in a Raspberry Pi
  12. Average response times using a CustomVision.ai docker container in a RaspberryPi and a PC

Windows 10 and YOLOV2 for Object Detection Series

See you at the next one in Chicago for some Deep Learning fun!

Happy coding!

Greetings @ Toronto

El Bruno