#Python – #FastAPI Webserver sharing information from values in a different thread

Hi !

After yesterday's post using Flask, I was sure that a FastAPI version would be needed, so here it goes:

I have a common scenario which involves:

  • A sensor collecting information
  • A web-server publishing the sensor information

Read my previous posts to understand why I think multi-threading is the simplest way to solve this:

  • Thread 1, where an infinite loop requests information from the sensor and stores the latest value to be shared.
  • Thread 2, where a web server processes requests and shares the latest sensor information.

Easy! And after a couple of tests, I managed to create a single file implementing this:
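
The original gist is not embedded in this page, so here is a minimal sketch of the idea, assuming a hypothetical read_sensor() function in place of the real sensor call:

import random
import threading
import time

import uvicorn
from fastapi import FastAPI

# shared state: the latest sensor reading (swapping the whole dict is atomic in CPython)
latest_value = {"value": 0.0, "timestamp": 0.0}

def read_sensor():
    # hypothetical sensor read; replace with the real sensor call
    return random.random()

def sensor_loop():
    global latest_value
    while True:
        latest_value = {"value": read_sensor(), "timestamp": time.time()}
        time.sleep(1)  # throttle so the sensor is not overwhelmed

app = FastAPI()

@app.get("/sensor")
def get_sensor():
    return latest_value

if __name__ == "__main__":
    # Thread 1: infinite loop reading the sensor
    threading.Thread(target=sensor_loop, daemon=True).start()
    # Thread 2 (main): web server sharing the latest value
    uvicorn.run(app, host="0.0.0.0", port=8000)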

So at this point, you may think: why does El Bruno need this? Well, let me share an image that I’ll use in future posts:

[Image: thermal camera demo]

Happy coding!

Greetings

El Bruno

#Python – Flask Webserver sharing information from values in a different thread

Hi !

I have a common scenario which involves:

  • A sensor collecting information
  • A web-server publishing the sensor information

This is simple; however, the sensor does not support constant requests, and it may return a “too many requests” response when called directly. The idea of getting the sensor information directly in the web request was not valid from day one.

I asked for support / guidance, and my amazing and smart friends showed me the concept of OVER ENGINEERING. Docker, Compose, queues, coordination and more were part of some of the proposals. However, they also showed me the easiest and simplest way to solve this: multi-threading.

  • Thread 1, where an infinite loop requests information from the sensor and stores the latest value to be shared.
  • Thread 2, where a web server processes requests and shares the latest sensor information.

Easy! And after a couple of tests, I managed to create a single file implementing this:
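
The original gist is not embedded in this page, so here is a minimal sketch of the idea in Flask, again assuming a hypothetical read_sensor() function in place of the real sensor call:

import random
import threading
import time

from flask import Flask, jsonify

# shared state: the latest sensor reading
latest_value = {"value": 0.0, "timestamp": 0.0}

def read_sensor():
    # hypothetical sensor read; replace with the real sensor call
    return random.random()

def sensor_loop():
    global latest_value
    while True:
        latest_value = {"value": read_sensor(), "timestamp": time.time()}
        time.sleep(1)  # throttle so the sensor is not overwhelmed

app = Flask(__name__)

@app.route("/sensor")
def get_sensor():
    return jsonify(latest_value)

if __name__ == "__main__":
    # Thread 1: infinite loop reading the sensor
    threading.Thread(target=sensor_loop, daemon=True).start()
    # Thread 2 (main): web server sharing the latest value
    app.run(host="0.0.0.0", port=8080)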

So at this point, you may think: why does El Bruno need this? Well, let me share an image that I’ll use in future posts:

[Image: thermal camera demo]

Note: Some very smart people also suggested implementing this using FastAPI instead of Flask, so a future post may cover this.

Happy coding!

Greetings

El Bruno

#Python – Update all Python Packages using pip with #PowerShell

Hi !

Here is a small script that I always go back to my notes to search for. I found it on the internet some time ago, and it is the easiest way to update all the packages in a specific environment using pip and PowerShell:

pip freeze | %{$_.split('==')[0]} | %{pip install --upgrade $_}

The one-liner lists every installed package with pip freeze, keeps only the package name from each name==version line, and pipes each name into pip install --upgrade. Depending on your environment, it may take some time. Once it finishes, all your packages are updated!

Happy coding!

Greetings

El Bruno

#Coding4Fun – How to control your #drone with 20 lines of code! (20/N)

Hi !

We already have the drone camera feed ready to process, so let’s do some Image Segmentation today. As usual, let’s start with the formal definition of Image Segmentation:

In digital image processing and computer vision, image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as image objects). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze.[1][2] Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics.

The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image (see edge detection). Each of the pixels in a region are similar with respect to some characteristic or computed property, such as color, intensity, or texture. Adjacent regions are significantly different with respect to the same characteristic(s).[1] When applied to a stack of images, typical in medical imaging, the resulting contours after image segmentation can be used to create 3D reconstructions with the help of interpolation algorithms like marching cubes.[3]

Wikipedia, Image Segmentation

The technique is amazing, and once it is attached to the drone camera, we can get something like this:

I used a Python library to do most of the work: PixelLib. It was created by an amazing set of colleagues, so please check the references and take a look at the project description.

PixelLib is a library built for an easy implementation of Image Segmentation in real life problems. PixelLib is a flexible library that can be integrated into software solutions that require the application of Image Segmentation.

PixelLib

Once I had all the pieces together, I submitted a Pull Request with a single change to allow the use of OpenCV and webcam camera frames, and I got a basic demo up and running.

Let’s review the code

  • Line 147. That’s it: a single line which performs the instance segmentation and also displays the bounding boxes.

Sample Code
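
The gist is not embedded in this page, so here is a minimal sketch of that single call, assuming PixelLib’s segmentFrame API on webcam frames (the drone feed works the same way) and the pretrained Mask R-CNN model file that PixelLib documents:

import cv2
from pixellib.instance import instance_segmentation

# load a pretrained Mask R-CNN model (mask_rcnn_coco.h5, downloaded separately)
segmenter = instance_segmentation()
segmenter.load_model("mask_rcnn_coco.h5")

capture = cv2.VideoCapture(0)  # webcam here; the drone feed works the same way
while True:
    ret, frame = capture.read()
    if not ret:
        break
    # the single line: performs instance segmentation and draws the bounding boxes
    segmask, output = segmenter.segmentFrame(frame, show_bboxes=True)
    cv2.imshow("PixelLib", output)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
capture.release()
cv2.destroyAllWindows()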

I’ll show a couple of live demos of this in my next Global AI Community Drone AI sessions. Check my next events section!

Happy coding!

Greetings

El Bruno

References

#Windows10 – #WindowsTerminal and Anaconda, bonus: start an Anaconda Virtual Environment

Hi !

Windows Terminal (WT) is one of the coolest tools I’ve been using in the last couple of years. I’m not an expert, and not even a fan of CLIs; however, I must say working with WT is super cool.

Bonus: If you speak Spanish, I shared my own thoughts about this with Juan and Eduard in a podcast episode here.

On top of this, I also use Anaconda a lot. And, now that we can launch and use Anaconda from a PowerShell Prompt, I thought I should spend some time figuring out how to get Anaconda inside Windows Terminal.

[Image: windows terminal new profiles for Anaconda]

I will assume that you know the basics of Windows Terminal profiles. As WT users, we can create as many profiles as we want to have different tools available. This is my starting point for using Anaconda with Windows Terminal.

Note: Check References for Donovan Brown post about working with profiles.

Create a new profile to launch Anaconda in Windows Terminal

Let’s go to Windows Terminal Settings to create a new profile for Anaconda. In order to do this, I’ll copy and paste an existing profile, update the Guid, and complete the following values:

  • guid: create and paste a new Guid
  • name: I defaulted this to Anaconda
  • commandline: this is the tricky one, so I’ll describe the steps below.

I browsed to [C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Anaconda3 (64-bit)] and viewed the properties of the [Anaconda PowerShell Prompt] shortcut. Then I copied the Target value and used it in the commandline element.

[Image: get properties from shortcut for Anaconda Command Prompt]

That should be enough to have a new profile to work with Anaconda. When we open a new tab for Anaconda, we can see that it opens the default virtual environment (base).

[Image: default anaconda prompt in windows terminal]

1st step done: Anaconda is integrated with Windows Terminal!

Start an Anaconda Virtual Environment in a new Windows Terminal Tab

Moving on, I’ll create and activate a new Virtual Environment named [drone] with the following commands:

conda create -n drone pip python=3.7
conda activate drone

Now I have a new environment named [drone], and I want a shortcut in Windows Terminal to open a new tab with this Virtual Env activated. I copied and pasted the definition of the Anaconda profile, used a new Guid, and added the following command to the end of the commandline:

conda activate drone

[Image: virtual env opened in windows terminal]

As you can see in the previous image, when I open a new tab for my Drone Virtual Env, I already have it loaded. I also added a [cls] command at the end, so I can start with a clean screen.

Finally, and for reference, this is my current Windows Terminal settings file including the 2 Anaconda profiles:
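
The settings file itself is not embedded in this page, but the 2 profiles look roughly like this; the Guids and the Anaconda install paths are placeholders, and yours will differ:

{
    "guid": "{00000000-0000-0000-0000-000000000001}",
    "name": "Anaconda",
    "commandline": "powershell.exe -ExecutionPolicy ByPass -NoExit -Command \"& 'C:\\ProgramData\\Anaconda3\\shell\\condabin\\conda-hook.ps1' ; conda activate 'C:\\ProgramData\\Anaconda3'\""
},
{
    "guid": "{00000000-0000-0000-0000-000000000002}",
    "name": "Anaconda drone",
    "commandline": "powershell.exe -ExecutionPolicy ByPass -NoExit -Command \"& 'C:\\ProgramData\\Anaconda3\\shell\\condabin\\conda-hook.ps1' ; conda activate 'C:\\ProgramData\\Anaconda3' ; conda activate drone ; cls\""
}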

Happy coding!

Greetings

El Bruno

References

#Python – Easy way to install #OpenCV and #TensorFlow with Anaconda

Hi !

After sharing a couple of sessions using the drone and working with the drone camera, a couple of people contacted me with issues while trying to install OpenCV and/or TensorFlow for use with Python.

There are plenty of tutorials about this, so I will share a very easy one.

The 1st step is to install Anaconda (see references). Once Anaconda is installed, let’s launch the Anaconda PowerShell Prompt to install the dependencies.

The 2 main commands to install the desired packages are:

# install TensorFlow
pip install tensorflow --user

# install OpenCV
pip install opencv-python

And that’s it! All the packages should be installed. An easy way to test that both packages are installed is to launch python and run the following commands:

# check tensorflow, and display tensorflow version
import tensorflow as tf
tf.__version__

# check OpenCV, and display OpenCV version
import cv2
cv2.__version__

The output should be similar to this one:

[Image: anaconda check tensorflow and opencv with python]

Happy coding!

Greetings

El Bruno

References

#Coding4Fun – How to control your #drone with 20 lines of code! (19/N)

Hi !

Today I faced another challenge: I needed to overlay an image on top of another. Something like this:

[Image: camera overlay images with python]

Luckily for me, and as usual, OpenCV allows us to do this with a few lines of code. Let’s take a look:

  • Line 8. Define a custom size for all the images: the background image and the camera feed frame.
  • Lines 10-12. Load and resize the background image.
  • Line 21. Overlay the camera frame and the background image.

Sample Code
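
The gist is not embedded in this page, so here is a minimal sketch of the overlay, using a webcam instead of the drone feed and cv2.addWeighted for the blend; the file name, sizes, and blend weights are assumptions:

import cv2

# custom size shared by the background image and the camera feed frame
camera_width, camera_height = 640, 480

# load and resize the background image (file name is an assumption)
background = cv2.imread("background.png")
background = cv2.resize(background, (camera_width, camera_height))

capture = cv2.VideoCapture(0)  # webcam here; the drone feed works the same way
while True:
    ret, frame = capture.read()
    if not ret:
        break
    frame = cv2.resize(frame, (camera_width, camera_height))
    # overlay the camera frame and the background image
    overlay = cv2.addWeighted(background, 0.5, frame, 0.5, 0)
    cv2.imshow("overlay", overlay)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
capture.release()
cv2.destroyAllWindows()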

And from here, I’ll update some of my posts using the drone camera.

Happy coding!

Greetings

El Bruno

#Coding4Fun – How to control your #drone with 20 lines of code! (18/N)

Hi !

Today I’ll step back a couple of posts and add 2 simple lines to allow me to save a video file from the drone camera. This was a request, and it makes a lot of sense to have the drone camera feed recorded to a file.

The video will later contain detected objects and more, so let’s go with the code; a sketch of these steps follows the list below. All the magic happens here:

  • Lines 97-103. Open the drone camera stream, and also open a video output stream to save the video file.
  • Lines 123-124. Display the camera feed and add the camera frame to the output video file.
  • Lines 136-139. Dispose of the objects and close the video output file.
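
The gist is not embedded in this page, so here is a minimal sketch of those 3 steps, using a webcam and cv2.VideoWriter; the codec, file name, frame rate, and frame size are assumptions:

import cv2

# open the camera stream (webcam here; the drone feed works the same way)
capture = cv2.VideoCapture(0)

# open a video output stream to save the video file
fourcc = cv2.VideoWriter_fourcc(*'XVID')
output = cv2.VideoWriter('drone.avi', fourcc, 20.0, (640, 480))

while True:
    ret, frame = capture.read()
    if not ret:
        break
    frame = cv2.resize(frame, (640, 480))
    cv2.imshow('camera', frame)  # display the camera feed
    output.write(frame)          # add the camera frame to the output video file
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# dispose objects and close the video output file
capture.release()
output.release()
cv2.destroyAllWindows()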

Happy coding!

Greetings

El Bruno

#Coding4Fun – How to control your #drone with 20 lines of code! (17/N)

Hi !

Once we have a trained Custom Vision model instance, we can use it to recognize objects from the drone camera feed. Read my previous posts for details on these steps.

Another interesting scenario is to save local files for every detected object. In the following code, I’ll save 2 different files for every detected object:

  • A camera frame image, with a frame around the detected object
  • A plain text file with the JSON information

In the sample code below, the save process is in lines 122-129. And, while not fancy, the files share the same name so they can be correlated.

[Image: drone recognized files]

So let’s go to the full code:
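
The full gist is not embedded in this page, but the save process itself can be sketched like this; the helper name and the timestamp-based file name are assumptions:

import json
import time

import cv2

def save_detection(frame, prediction):
    # hypothetical helper: both files share the same name so they correlate
    filename = time.strftime("%Y%m%d_%H%M%S")
    # camera frame image, with the frame already drawn around the detected object
    cv2.imwrite(f"{filename}.jpg", frame)
    # plain text file with the JSON information
    with open(f"{filename}.json", "w") as jsonfile:
        json.dump(prediction, jsonfile)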

And if you want to see this up and running, it’s much better to watch it in a video (start at ):

The complete source code can be found here: https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Happy coding!

Greetings

El Bruno

#Coding4Fun – How to control your #drone with 20 lines of code! (16/N)

Hi !

In my previous post, I shared an example where I analyzed the camera feed using an Image Recognition model created with Custom Vision. Today I’ll expand the sample and show, in real time, the detected MVP logos with a frame drawn in the drone camera feed.

Let’s take a look at the demo working in the following image.

[Image: drone camera image analysis using custom vision and drawing frames for detected objects]

At the top of the image, we can see the app console log, with the information received for each analyzed frame. When an object is detected, we can see the tag, the probability, and the bounding box coordinates.

A sample JSON return string starts like this one:

{
  "created": "2020-04-08T17:22:02.179359",
  "id": "",
  "iteration": "",
  "predictions": [
    {
      "boundingBox": {
        "height": 0.1979116,
        "left": 0.3235259,
        "top": 0.05847502,
        "width": 0.20438321
      },
      "probability": 0.89171505,
      "tagId": 0,
      "tagName": "MVP"
    },
    {
      "boundingBox": {
        "height": 0.2091526,
        "left": 0.65271178,
        "top": 0.0433814,
        "width": 0.17669522
      },
      "probability": 0.70330358,
      "tagId": 0,
      "tagName": "MVP"
    },

In order to position the frames in the correct location, I need to do some math using the current camera and image size and the returned bounding box values for height, left, top, and width. Lines 87-110:

resize_factor = 100

# bounding box values are normalized (0-1), so scale them up to percentages
height = int(bb['height'] * resize_factor)
left = int(bb['left'] * resize_factor)
top = int(bb['top'] * resize_factor)
width = int(bb['width'] * resize_factor)

# adjust to the camera frame size
height = int(height * camera_Heigth / 100)
left = int(left * camera_Width / 100)
top = int(top * camera_Heigth / 100)
width = int(width * camera_Width / 100)

# draw the bounding box; cv2.rectangle expects (x, y) points
start_point = (left, top)
end_point = (left + width, top + height)
color = (255, 0, 0)
thickness = 2
cv2.rectangle(img, start_point, end_point, color, thickness)

So let’s go to the full code:

And if you want to see this up and running, it’s much better to watch it in a video (start at ):

The complete source code can be found here: https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Happy coding!

Greetings

El Bruno
