#Python – Working with dates 📅, formats, and subtraction for time differences. The timedelta type rocks!


Hi !

I’m switching from C# 9 to Python, and I’m writing this down so I avoid searching for and rewriting it from scratch again and again. My scenario includes very simple operations with dates:

  • Tag a start date time
  • Do some process; I’ll fake it with a random sleep
  • Tag an end date time
  • Calculate the difference between them
  • Process and show the difference with a specific format

For this I’ll use the datetime type, and a cool feature of datetime is that we can directly subtract 2 dates. For example:

import datetime
import time

start = datetime.datetime.utcnow()
time.sleep(10)
end = datetime.datetime.utcnow()

delta = end - start

In this code 👆, the delta variable is a timedelta type. This is not a standard datetime; in C# we know this type as TimeSpan. We can access the internal values, and get some details around hours, minutes, seconds and milliseconds:

def get_timedelta_values(delta):
    # Get the hours, minutes, seconds and milliseconds
    millis           = round(delta.microseconds / 1000, 0)
    minutes, seconds = divmod(delta.seconds, 60)
    hours, minutes   = divmod(minutes, 60)
    return hours, minutes, seconds, millis

def get_timedelta_min_and_sec(delta):
    hours, minutes, seconds, millis = get_timedelta_values(delta)
    return f'{str(minutes).zfill(2)}:{str(seconds).zfill(2)}'

The 2nd function gets the delta values and creates a standard output in “mm:ss” format. So, not tricky, and easy to move forward.
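A quick sanity check of these helpers (a minimal sketch, assuming the 2 functions above are already defined):

import datetime

delta = datetime.timedelta(hours=1, minutes=5, seconds=7, milliseconds=250)
print(get_timedelta_values(delta))       # (1, 5, 7, 250.0)
print(get_timedelta_min_and_sec(delta))  # 05:07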

Another cool feature of timedelta is that it supports addition. So, as a final example, I’ll generate some random deltas in a loop and add them into a totalTime var.

import time
import datetime
from random import randint

def get_timedelta_values(delta):
    # Get the hours, minutes, seconds and milliseconds
    millis           = round(delta.microseconds / 1000, 0)
    minutes, seconds = divmod(delta.seconds, 60)
    hours, minutes   = divmod(minutes, 60)
    return hours, minutes, seconds, millis

def get_timedelta_min_and_sec(delta):
    hours, minutes, seconds, millis = get_timedelta_values(delta)
    return f'{str(minutes).zfill(2)}:{str(seconds).zfill(2)}'

totalTime = None
i = 0
while i < 25:
    i = i + 1
    rndSleep = randint(1, 10)
    print(f'current iteration: {i} - rnd sleep: {rndSleep}')

    start = datetime.datetime.utcnow()
    time.sleep(rndSleep)
    end = datetime.datetime.utcnow()

    delta = end - start

    if (totalTime is None):
        totalTime = delta
    else:
        totalTime += delta

    print(f'  >> delta     : {get_timedelta_min_and_sec(delta)}')
    print(f'  >> total time: {get_timedelta_min_and_sec(totalTime)}')

The output, as expected, shows the delta in the specific format on each iteration, and also the accumulated total time:

(Image: adding timedelta vars in Python)
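By the way, since timedelta supports addition, Python’s built-in sum() can also replace the accumulator pattern above; a minimal sketch, assuming the deltas are collected in a list:

import datetime

deltas = [datetime.timedelta(seconds=s) for s in (3, 7, 5)]
totalTime = sum(deltas, datetime.timedelta())  # start from a zero timedelta
print(totalTime)  # 0:00:15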

So, here it is, a simple example that will save me some time in the future for sure !

Happy coding!

Greetings

El Bruno



#OpenCV – Open a video file 🎥 and save each frame as a PNG 🖼 file to a folder 📂 #Python


Hi !

This is an unusual scenario; however, I’m at a point where I need to extract all the frames from a video file. The reason: some of these frames will be used to train a Machine Learning model.

There are tools that can do this; however, it’s a nice moment to write some OpenCV code. Let’s go for it. A couple of remarks:

  • The video file must be in the same folder as the Python file, and the name is defined in the video_file variable
  • I resize the frame to 640 x 480; remove the cv2.resize() call (and fix the variable names) if you don’t need it
  • There is a live preview of the video; comment out the cv2.imshow() line to avoid this
  • You can stop the process at any time by pressing the Q key
# Bruno Capuano 2020
# open a video file and save each frame as a PNG file to a folder

import cv2
import os

# Camera Settings
camera_Width  = 640 # 1024 # 1280 # 640
camera_Heigth = 480 # 780  # 960  # 480
frameSize = (camera_Width, camera_Heigth)
video_file = "03.mp4"
video_capture = cv2.VideoCapture(video_file)

i = 0
while video_capture.isOpened():

    ret, frameOrig = video_capture.read()
    if ret == True:
        # resize frame, optional you may not need this
        frame = cv2.resize(frameOrig, frameSize)

        i += 1
        imgNumber = str(i).zfill(5)
        frameImageFileName = os.path.join("03", f"image{imgNumber}.png")
        cv2.imwrite(frameImageFileName, frameOrig)

        cv2.imshow('Video', frame)
    else:
        break

    # key controller
    key = cv2.waitKey(1) & 0xFF    
    if key == ord("q"):
        break

video_capture.release()
cv2.destroyAllWindows()
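One detail: cv2.imwrite fails silently if the target folder does not exist, so no frames get saved. A small guard before the loop (assuming the same "03" output folder used above) avoids that:

import os

output_folder = "03"  # same folder used in frameImageFileName
os.makedirs(output_folder, exist_ok=True)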

Happy coding!

Greetings

El Bruno

#Event – Resources used during the session “Hack a drone, hack the camera and use AI” at the Global AI Tour, Lahore, Pakistan, 2020


Hi !

I had a great time early in the day with my Microsoft Student Partners from Lahore, Pakistan, for the Global AI Tour. As usual, time for slides and code:

Slides

Code

https://github.com/elbruno/events/tree/main/2020%2011%2030%20Global%20AI%20Tour%20Pakistan


Happy coding!

Greetings

El Bruno

#Python – Convert an #OpenCV frame to PIL image and vice versa


Hi !

It’s amazing how I can code and have fun for hours, and then suddenly spend 2 hours trying to manually do something that’s already solved all over the internet. Today’s challenge:

Convert an OpenCV frame to PIL image

And why? Very simple: OpenCV follows the BGR convention and PIL follows the RGB convention. So even if the arrays have the same values, they are in different formats. So, here go both scenarios:

OpenCV image to PIL Image

# importing Image from PIL
from PIL import Image
import cv2

img_tmp = cv2.cvtColor(img_openCV, cv2.COLOR_BGR2RGB)
im_PIL = Image.fromarray(img_tmp)

PIL Image to OpenCV Image

# using numpy
import numpy as np

img_openCV = np.asarray(im_pil)
# note: the resulting array is in RGB order; if the rest of the pipeline
# expects OpenCV's BGR order, apply cv2.cvtColor(img_openCV, cv2.COLOR_RGB2BGR)
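To sanity-check both conversions, a minimal round-trip sketch (the file name 01.jpg is hypothetical; any local image works):

import cv2
import numpy as np
from PIL import Image

frame = cv2.imread("01.jpg")  # BGR ndarray
im_pil = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
back = cv2.cvtColor(np.asarray(im_pil), cv2.COLOR_RGB2BGR)
assert (frame == back).all()  # the BGR -> RGB -> BGR round trip is lossless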

And, that’s it ! I’ll leave it here, and I know it will save me a couple of hours in the near future.

Happy coding!

Greetings

El Bruno

#YoloV3 – Rapid Object detection with 5 lines of code ( @code and #Python for the win! )


Hi !

Some time ago, I wrote a simple Python class as a wrapper for YoloV3. It allows me to write 5 lines of code to analyze an image. Once I use this class, I only press F5 in Visual Studio Code and it’s magic:

(Image: detecting images with 3 lines of code)

A sample usage looks similar to this one. The only required parameter for the YoloV3 analyzer is the confidence; 0.5 is good enough for this demo:

# Bruno Capuano 2020
# simple implementation for YoloV3 object detection
import cv2
from yoloanalyzer import yoloV3_analyzer
image_path = "02.jpg"
# init Analyzer with confidence 50%
ya = yoloV3_analyzer(0.5)
# analyze and show image
image = cv2.imread(image_path)
newImage = ya.ImageProcess(image)
cv2.imshow("Rapid YoloV3 demo", newImage)
# wrap up
cv2.waitKey()
cv2.destroyAllWindows()

And the output is rocking: not only cats, but also dogs and humans !

I added a couple of parameters to define if we want labels and bounding boxes in the output image. The same image without bounding boxes will be:
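For instance, a hedged sketch of the call that produces that image (using the showLabels and showBoundingBox parameters of ImageProcess, shown in the class below):

# same analyzer as above: draw the labels but skip the boxes
newImage = ya.ImageProcess(image, showLabels=True, showBoundingBox=False)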

And finally, the main class that performs this. It’s a very simple one; feel free to use it, and remember that you must check the official YoloV3 repository to get these files:

  • coco.names
  • yolov3.cfg
  • yolov3.weights
# Bruno Capuano 2020
# performs object detection using YoloV3 in an image and returns the processed image
import numpy as np
import cv2
import time

class yoloV3_analyzer:
    def __init__(self, confidence):
        self.confidence = confidence

    def InitYoloV3(self):
        self.weights = "yolov3.weights"
        self.config = "yolov3.cfg"
        self.labelsPath = "coco.names"
        self.LABELS = open(self.labelsPath).read().strip().split("\n")
        self.COLORS = np.random.uniform(0, 255, size=(len(self.LABELS), 3))
        self.net = cv2.dnn.readNetFromDarknet(self.config, self.weights)
        self.ln = self.net.getLayerNames()
        self.ln = [self.ln[i[0] - 1] for i in self.net.getUnconnectedOutLayers()]

    def ImageProcess(self, image, showLabels=True, showBoundingBox=True):
        # Init YOLO if needed
        if self.net is None:
            self.InitYoloV3()

        (H, W) = image.shape[:2]
        frame = image.copy()
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
        self.net.setInput(blob)
        starttime = time.time()
        layerOutputs = self.net.forward(self.ln)
        stoptime = time.time()
        print("Inference time: {:.4f} s".format(stoptime - starttime))

        confidences = []
        outline = []
        class_ids = []
        for output in layerOutputs:
            for detection in output:
                scores = detection[5:]
                maxi_class = np.argmax(scores)
                confidence = scores[maxi_class]
                if confidence > self.confidence:
                    box = detection[0:4] * np.array([W, H, W, H])
                    (centerX, centerY, width, height) = box.astype("int")
                    x = int(centerX - (width / 2))
                    y = int(centerY - (height / 2))
                    outline.append([x, y, int(width), int(height)])
                    class_ids.append(maxi_class)
                    confidences.append(float(confidence))

        # non-maximum suppression to remove overlapping boxes
        box_line = cv2.dnn.NMSBoxes(outline, confidences, 0.5, 0.3)
        if len(box_line) > 0:
            flat_box = box_line.flatten()
            for i in flat_box:
                (x, y) = (outline[i][0], outline[i][1])
                (w, h) = (outline[i][2], outline[i][3])
                x_plus_w = round(x + w)
                y_plus_h = round(y + h)
                label = str(self.LABELS[class_ids[i]])
                color = self.COLORS[class_ids[i]]
                if showBoundingBox:
                    cv2.rectangle(frame, (x, y), (x_plus_w, y_plus_h), color, 2)
                if showLabels:
                    cv2.putText(frame, label, (x - 10, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
        return frame

    # Yolo state, initialized lazily on the first ImageProcess call
    net = None
    ln = None
    LABELS = None
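One caveat worth noting: on newer OpenCV releases (4.5.4 and later), getUnconnectedOutLayers() returns a flat array of indices instead of nested single-element arrays, so the i[0] indexing in InitYoloV3 raises an error there. A version-tolerant sketch of that line:

# works whether getUnconnectedOutLayers() returns [[200], [227]] or [200, 227]
out_layers = np.array(self.net.getUnconnectedOutLayers()).flatten()
self.ln = [self.ln[int(i) - 1] for i in out_layers]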

Happy coding!

Greetings

El Bruno


#Coding4Fun – Sorting Arrays in #JavaScript, #Python and #CSharp … choose your own adventure


Hi !

Not my usual kind of post; however, this kept me thinking a lot. It all started with this tweet, a usual joke about JavaScript and how it works with types.

(Image: array sorting in JavaScript)

I noticed that some responses focused on the “you need to know how the language works” angle. And it is correct: you need to know how JavaScript converts the elements into strings and then sorts them, to understand the crappy, not-so-accurate output.

However, I am not a JavaScript expert, and I do not want to be one. When I use a tool or a programming language, I expect a nice learning curve; the previous example is not a nice “Welcome to JavaScript, where arrays and type conversions will make you think twice until you get it!”.

I’ve been doing a lot of Python programming, so I did the same exercise, and it worked as you may expect !

(Image: array and sort in Python)
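For reference, a minimal version of the exercise in Python (the exact numbers in the original tweet may differ):

values = [1, 5, 40, 200, 1000, 25, 10]
print(sorted(values))  # [1, 5, 10, 25, 40, 200, 1000] - numeric sort, no surprises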

I also did a similar test in C#, and it also worked !

(Image: array and sort in C#)

Do not get me wrong, this is not an Anti-JavaScript post. JavaScript is awesome and is almost everywhere. I know a lot of awesome JavaScript programmers, who I really admire.

However, as a developer, I expect to avoid something as simple as an array sort issue. I will always focus on “the code is clean and everyone will understand how it works, just read the code!”.

I am used to this as a C# developer; and I also found that Python is cool for this. But, sometimes, just sometimes, JavaScript looks like hell !

Bonus: An old post in my last JavaScript adventure.

Happy coding!

Greetings

El Bruno

#Python – Context Menu and Key Press with #PySimpleGUI


Hi !

After my base sample code for a GUI app, now it’s time to add some interaction features:

  • Context Menu
  • Capture Key Press

In the following example, I’m adding a context menu with the following elements:

(Image: PySimpleGUI context menu in Python)

This can be done as part of the window definition, for example:

right_click_menu = ['Unused', ['&FPS', '---', 'Menu A', 'Menu B', 'Menu C', ['Menu C1', 'Menu C2'], '---', 'Exit']]

window    = sg.Window("El Bruno - Webcams and GrayScale with PySimpleGUI", layout, 
                    right_click_menu=right_click_menu,
                    no_titlebar=False, alpha_channel=1, grab_anywhere=False, 
                    return_keyboard_events=True, location=(100, 100))      

The menu definition and how to create submenus, separators, quick access keys and more are part of the PySimpleGUI documentation.

And then, in order to capture events in the window, we need to check the events read on each iteration of the while loop. The following sample checks the window events:

  • Close the window if the user presses the [X] to close the window, or clicks the [Exit] element on the context menu
  • Change the value of a boolean var if the user presses the [F] key, or clicks the [FPS] element on the context menu
    # process windows events
    event, values = window.read(timeout=20)
    if event == sg.WIN_CLOSED or event == "Exit":
        break
    if event == "f" or event == "F" or event == "FPS":
        display_fps = not display_fps

The full code:

# Bruno Capuano 2020
# display the camera feed using OpenCV
# display FPS
import time
import cv2
import PySimpleGUI as sg

# init Windows Manager
sg.theme("DarkBlue")

# def webcam col
colwebcam1_layout = [[sg.Text("Camera View", size=(60, 1), justification="center")],
                     [sg.Image(filename="", key="cam1")]]
colwebcam1 = sg.Column(colwebcam1_layout, element_justification='center')

colwebcam2_layout = [[sg.Text("Camera View GrayScale", size=(60, 1), justification="center")],
                     [sg.Image(filename="", key="cam1gray")]]
colwebcam2 = sg.Column(colwebcam2_layout, element_justification='center')

colslayout = [colwebcam1, colwebcam2]
rowfooter = [sg.Image(filename="avabottom.png", key="-IMAGEBOTTOM-")]
layout = [colslayout, rowfooter]

right_click_menu = ['Unused', ['&FPS', '---', 'Menu A', 'Menu B', 'Menu C', ['Menu C1', 'Menu C2'], '---', 'Exit']]

window = sg.Window("El Bruno - Webcams and GrayScale with PySimpleGUI", layout,
                   right_click_menu=right_click_menu,
                   no_titlebar=False, alpha_channel=1, grab_anywhere=False,
                   return_keyboard_events=True, location=(100, 100))

# Camera Settings
camera_Width  = 480  # 640 # 1024 # 1280
camera_Heigth = 320  # 480 # 780  # 960
frameSize = (camera_Width, camera_Heigth)
video_capture = cv2.VideoCapture(0)
time.sleep(2.0)

display_fps = False

while True:
    start_time = time.time()

    # process windows events
    event, values = window.read(timeout=20)
    if event == sg.WIN_CLOSED or event == "Exit":
        break
    if event == "f" or event == "F" or event == "FPS":
        display_fps = not display_fps

    # get camera frame
    ret, frameOrig = video_capture.read()
    frame = cv2.resize(frameOrig, frameSize)

    if display_fps and (time.time() - start_time) > 0:
        fpsInfo = "FPS: " + str(1.0 / (time.time() - start_time))  # FPS = 1 / time to process loop
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, fpsInfo, (10, 20), font, 0.4, (255, 255, 255), 1)

    # update webcam1
    imgbytes = cv2.imencode(".png", frame)[1].tobytes()
    window["cam1"].update(data=imgbytes)

    # transform frame to grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # update webcam2
    imgbytes = cv2.imencode(".png", gray)[1].tobytes()
    window["cam1gray"].update(data=imgbytes)

video_capture.release()
cv2.destroyAllWindows()

Happy coding!

Greetings

El Bruno


#Python – Creating GUIs with #PySimpleGUI. 2 webcams view with 50 lines of code


Hi !

Working with Computer Vision is super fun. And there are some scenarios where displaying the step-by-step processing of an image is the best way to present it.

In most of my scenarios I use OpenCV; however, for a more detailed presentation I needed to search for and learn a GUI framework in Python. That’s how I got to PySimpleGUI (see the project’s documentation).

Note: as a long-time C# dev, I miss XAML a lot !

PySimpleGUI is very simple (as you can expect!), and with a few lines of code we can create a UI like this one:

Let’s display the camera feed and a grayscale view of the camera feed:

(Image: 2 cameras in Python, using PySimpleGUI to create a window to display them)

Super easy !

Let’s take a look at the code

  • The main window definition. The window has 2 rows:
    • the 1st row has 2 columns with the 2 cameras, each camera with its own element key
    • the 2nd row has an image to display at the bottom
  • The layout variable is the final merge of the 2 rows
  • The sg.Window() call is the window definition. We can define the title, transparency, etc.
  • Window event management; I’ll write more about this for sure. Right now, I’m only checking for the window close event to exit the loop.
  • Update of the 1st camera viewer: transform the camera frame to a byte array, based on the PNG format, and assign the array to the "cam1" element.
  • Update of the 2nd camera viewer: transform the camera frame to a grayscale frame, then transform it to a byte array, based on the PNG format, and assign the array to the "cam1gray" element.

Done !

# Bruno Capuano 2020
# display the camera feed using OpenCV
# display the camera feed with grayscale using OpenCV
import time
import cv2
import PySimpleGUI as sg

# Camera Settings
camera_Width  = 320  # 480 # 640 # 1024 # 1280
camera_Heigth = 240  # 320 # 480 # 780  # 960
frameSize = (camera_Width, camera_Heigth)
video_capture = cv2.VideoCapture(0)
time.sleep(2.0)

# init Windows Manager
sg.theme("DarkBlue")

# def webcam col
colwebcam1_layout = [[sg.Text("Camera View", size=(60, 1), justification="center")],
                     [sg.Image(filename="", key="cam1")]]
colwebcam1 = sg.Column(colwebcam1_layout, element_justification='center')

colwebcam2_layout = [[sg.Text("Camera View GrayScale", size=(60, 1), justification="center")],
                     [sg.Image(filename="", key="cam1gray")]]
colwebcam2 = sg.Column(colwebcam2_layout, element_justification='center')

colslayout = [colwebcam1, colwebcam2]
rowfooter = [sg.Image(filename="avabottom.png", key="-IMAGEBOTTOM-")]
layout = [colslayout, rowfooter]

window = sg.Window("El Bruno - Webcams and GrayScale with PySimpleGUI", layout,
                   no_titlebar=False, alpha_channel=1, grab_anywhere=False,
                   return_keyboard_events=True, location=(100, 100))

while True:
    start_time = time.time()
    event, values = window.read(timeout=20)
    if event == sg.WIN_CLOSED:
        break

    # get camera frame
    ret, frameOrig = video_capture.read()
    frame = cv2.resize(frameOrig, frameSize)

    # optional FPS overlay
    # if (time.time() - start_time) > 0:
    #     fpsInfo = "FPS: " + str(1.0 / (time.time() - start_time))  # FPS = 1 / time to process loop
    #     font = cv2.FONT_HERSHEY_DUPLEX
    #     cv2.putText(frame, fpsInfo, (10, 20), font, 0.4, (255, 255, 255), 1)

    # update webcam1
    imgbytes = cv2.imencode(".png", frame)[1].tobytes()
    window["cam1"].update(data=imgbytes)

    # transform frame to grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # update webcam2
    imgbytes = cv2.imencode(".png", gray)[1].tobytes()
    window["cam1gray"].update(data=imgbytes)

video_capture.release()
cv2.destroyAllWindows()

Happy coding!

Greetings

El Bruno


#Python – #FastAPI Webserver sharing information from values in a different thread


Hi !

After yesterday’s post using Flask, I was sure that a FastAPI version would be needed, so here it goes:

I have a common scenario which involves:

  • A sensor collecting information
  • A web-server publishing the sensor information

Read my previous posts to understand why I think this is the simplest way to solve this: multi-threading.

  • Thread 1, where an infinite loop requests information from the sensor and stores the latest value to be shared.
  • Thread 2, where a web server processes requests and shares the latest sensor information.

Easy ! And after a couple of tests, I managed to create a single file implementing this:

# Bruno Capuano
# simple webserver with fastapi
# run with: uvicorn 07:app --reload
# test with http://127.0.0.1:8000/getdata
# on each call, validate if the thread is started;
# if the thread is None, start a different thread that +1s a shared var
from fastapi import FastAPI
import threading
import time

stateThread = None
iCounter = 0

app = FastAPI()

def validateStateThread():
    global stateThread
    if stateThread is None:
        print("start thread")
        stateThread = threading.Thread(target=mainSum)
        stateThread.daemon = True
        stateThread.start()

@app.get("/getdata")
def main():
    global iCounter
    validateStateThread()
    t = time.localtime()
    current_time = time.strftime("%H:%M:%S", t)
    return str(f"{current_time} - data {iCounter}")

def mainSum():
    # increment counter every second
    global iCounter
    while True:
        iCounter = iCounter + 1
        t = time.localtime()
        current_time = time.strftime("%H:%M:%S", t)
        print(str(f"{current_time} - data {iCounter}"))
        time.sleep(1)
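An alternative sketch, assuming the same mainSum function above: instead of lazily starting the worker on the first request, FastAPI can start it when the app boots, using a startup event:

# start the worker thread when the app boots, instead of on the first request
@app.on_event("startup")
def start_worker():
    worker = threading.Thread(target=mainSum, daemon=True)
    worker.start()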

So at this point, you may think: why does El Bruno need this? So, let’s share an image that I’ll use in future posts:

(Image: thermal camera demo)

Happy coding!

Greetings

El Bruno


#Python – Flask Webserver sharing information from values in a different thread


Hi !

I have a common scenario which involves:

  • A sensor collecting information
  • A web-server publishing the sensor information

This is simple; however, the sensor does not support constant requests, and it may return a “too many requests” response when called directly. So the idea of getting the sensor information directly in the web request was not valid from day zero.

I asked for support / guidance, and my amazing and smart friends showed me the concept of OVER ENGINEERING. Docker, Compose, queues, coordination and more were part of some of the proposals. However, they also showed me the easiest and simplest way to solve this: multi-threading.

  • Thread 1, where an infinite loop requests information from the sensor and stores the latest value to be shared.
  • Thread 2, where a web server processes requests and shares the latest sensor information.

Easy ! And after a couple of tests, I managed to create a single file implementing this:

# Bruno Capuano
# start a webserver with flask in a thread
# start a different thread that +1s a shared var
from flask import Flask
import threading
import time

iCounter = 0
data = 'foo'

app = Flask(__name__)

def mainSum():
    # increment counter every second
    global iCounter
    while True:
        iCounter = iCounter + 1
        t = time.localtime()
        current_time = time.strftime("%H:%M:%S", t)
        print(str(f"{current_time} - data {iCounter}"))
        time.sleep(1)

def startWebServer():
    app.run(host='0.0.0.0', port=8080)

@app.route("/getdata")
def main():
    global iCounter
    t = time.localtime()
    current_time = time.strftime("%H:%M:%S", t)
    return str(f"{current_time} - data {iCounter}")

if __name__ == "__main__":
    stateThread = threading.Thread(target=mainSum)
    stateThread.daemon = True
    stateThread.start()
    webThread = threading.Thread(target=startWebServer)
    webThread.start()
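Once both threads are running, the endpoint can be checked from a separate script; a minimal sketch with the standard library, assuming the server above is listening on port 8080:

from urllib.request import urlopen

# call the /getdata route exposed by the Flask thread
print(urlopen("http://127.0.0.1:8080/getdata").read().decode())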

So at this point, you may think: why does El Bruno need this? So, let’s share an image that I’ll use in future posts:

(Image: thermal camera demo)

Note: some very smart people also suggested implementing this using FastAPI instead of Flask, so a future post may include this.

Happy coding!

Greetings

El Bruno
