#Humor – Tabs vs Spaces

Source: somewhere on the internet

Happy coding!

Greetings

El Bruno

#Anaconda – How to List and Remove Virtual Environments (get back some disk space!)

Buy Me A Coffee

Hi !

I already shared how to create Virtual Environments using Anaconda, and also how to create shortcuts to use them directly in Windows Terminal (see references). This task is easy and amazing; however, at some point you may want to clean up your environments.

That’s an easy task. I’m currently using Anaconda version 4.8.3. You can check your version with the command

conda -V

To list your virtual environments, use the command

conda info --envs
Anaconda PowerShell displaying the conda version and virtual environments

Before deleting any of them, I checked their sizes and … they use quite some disk space.

anaconda virtual environments disk size

As you can see in the previous image:

  • drone02, disk size is 2 GB
  • p38, disk size is 1.4 GB
  • telloOpenCV, disk size is 2.6 GB
  • tfenv, disk size is 1.76 GB

I didn't even check the other virtual environments. Right now I'm only using 2 of the 6 on the list, so I'll delete the unused ones.
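If you want to check how much disk each environment uses without clicking through folders, a quick sketch like this works. The envs folder path below is an assumption for a default install; run conda info to confirm yours.

```python
# Sum the size of every file under each Anaconda environment folder.
import os

def folder_size_bytes(path):
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            file_path = os.path.join(root, name)
            if os.path.isfile(file_path):
                total += os.path.getsize(file_path)
    return total

# Default install location; adjust if your Anaconda lives elsewhere
envs_root = os.path.expanduser("~/anaconda3/envs")
if os.path.isdir(envs_root):
    for env in sorted(os.listdir(envs_root)):
        size_gb = folder_size_bytes(os.path.join(envs_root, env)) / (1024 ** 3)
        print(f"{env}: {size_gb:.2f} GB")
```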

To delete a virtual environment, we use the command

conda env remove --name ENVIRONMENT

And with simple commands like these, I can remove the unused ones

conda env remove --name drone02
conda env remove --name telloOpenCV
conda env remove --name tfenv

And after this, I get some disk space back!
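To confirm what survived the cleanup, conda env list --json prints the remaining environment paths as JSON, which is easy to parse. The helper below is illustrative, not part of conda; note that the base install shows up under its folder name rather than "base".

```python
# Parse the JSON printed by `conda env list --json` into environment names.
import json
import os
import subprocess

def env_names(conda_json):
    data = json.loads(conda_json)
    return [os.path.basename(path) for path in data["envs"]]

# Uncomment to run against your local conda:
# output = subprocess.check_output(["conda", "env", "list", "--json"], text=True)
# print(env_names(output))
```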

Happy coding!

Greetings

El Bruno

References

#Python – Context Menu and Key Press with #PySimpleGUI

Hi !

After my base sample code for a GUI app, now it’s time to add some interaction features:

  • Context Menu
  • Capture Key Press

In the following example, I’m adding a context menu with the following elements:

python pysimplegui context menu

This can be done as part of the window definition, for example:

right_click_menu = ['Unused', ['&FPS', '---', 'Menu A', 'Menu B', 'Menu C', ['Menu C1', 'Menu C2'], '---', 'Exit']]

window    = sg.Window("El Bruno - Webcams and GrayScale with PySimpleGUI", layout, 
                    right_click_menu=right_click_menu,
                    no_titlebar=False, alpha_channel=1, grab_anywhere=False, 
                    return_keyboard_events=True, location=(100, 100))      

The menu definition and how to create submenus, separators, quick access keys and more are part of the PySimpleGUI documentation.
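The nested-list convention is easy to inspect with plain Python: '&' marks a quick-access key, '---' is a separator, and a list that follows an item becomes its submenu. This little helper is mine, not part of PySimpleGUI, and just flattens the clickable labels:

```python
right_click_menu = ['Unused', ['&FPS', '---', 'Menu A', 'Menu B', 'Menu C', ['Menu C1', 'Menu C2'], '---', 'Exit']]

def menu_item_labels(entries):
    # Walk the nested menu definition, skipping '---' separators
    # and stripping the '&' quick-access marker.
    labels = []
    for entry in entries:
        if isinstance(entry, list):
            labels.extend(menu_item_labels(entry))
        elif entry != '---':
            labels.append(entry.replace('&', ''))
    return labels

print(menu_item_labels(right_click_menu[1]))
# ['FPS', 'Menu A', 'Menu B', 'Menu C', 'Menu C1', 'Menu C2', 'Exit']
```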

And then, in order to capture events in the window, we need to check the events read in each iteration of the while loop. The following sample checks the window events to:

  • Close the window if the user presses the [X] to close it, or clicks the [Exit] element on the context menu
  • Change the value of a boolean var if the user presses the [F] key, or clicks the [FPS] element on the context menu
    # process windows events
    event, values = window.read(timeout=20)
    if event == sg.WIN_CLOSED or event == "Exit":
        break
    if event == "f" or event == "F" or event == "FPS":
        display_fps = not display_fps
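The same checks can be factored into a small function that is easy to unit test away from the GUI. The name handle_event is mine, and it relies on sg.WIN_CLOSED being None in PySimpleGUI:

```python
def handle_event(event, display_fps):
    """Return (should_exit, display_fps) for one window event."""
    # sg.WIN_CLOSED is None, so a closed window arrives as event None
    if event is None or event == "Exit":
        return True, display_fps
    # key press "f"/"F" or the FPS context menu entry toggles the flag
    if event in ("f", "F", "FPS"):
        display_fps = not display_fps
    return False, display_fps
```

In the loop this becomes `should_exit, display_fps = handle_event(event, display_fps)` followed by a `break` when `should_exit` is true.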

The full code:

# Bruno Capuano 2020
# display the camera feed using OpenCV
# display FPS

import time
import cv2
import PySimpleGUI as sg

# init Windows Manager
sg.theme("DarkBlue")

# def webcam col
colwebcam1_layout = [[sg.Text("Camera View", size=(60, 1), justification="center")],
                     [sg.Image(filename="", key="cam1")]]
colwebcam1 = sg.Column(colwebcam1_layout, element_justification='center')

colwebcam2_layout = [[sg.Text("Camera View GrayScale", size=(60, 1), justification="center")],
                     [sg.Image(filename="", key="cam1gray")]]
colwebcam2 = sg.Column(colwebcam2_layout, element_justification='center')

colslayout = [colwebcam1, colwebcam2]
rowfooter = [sg.Image(filename="avabottom.png", key="-IMAGEBOTTOM-")]
layout = [colslayout, rowfooter]

right_click_menu = ['Unused', ['&FPS', '---', 'Menu A', 'Menu B', 'Menu C', ['Menu C1', 'Menu C2'], '---', 'Exit']]

window = sg.Window("El Bruno - Webcams and GrayScale with PySimpleGUI", layout,
                   right_click_menu=right_click_menu,
                   no_titlebar=False, alpha_channel=1, grab_anywhere=False,
                   return_keyboard_events=True, location=(100, 100))

# Camera Settings
camera_Width = 480   # 640 # 1024 # 1280
camera_Heigth = 320  # 480 # 780 # 960
frameSize = (camera_Width, camera_Heigth)
video_capture = cv2.VideoCapture(0)
time.sleep(2.0)

display_fps = False

while True:
    start_time = time.time()

    # process windows events
    event, values = window.read(timeout=20)
    if event == sg.WIN_CLOSED or event == "Exit":
        break
    if event == "f" or event == "F" or event == "FPS":
        display_fps = not display_fps

    # get camera frame
    ret, frameOrig = video_capture.read()
    frame = cv2.resize(frameOrig, frameSize)

    if display_fps and (time.time() - start_time) > 0:
        fpsInfo = "FPS: " + str(1.0 / (time.time() - start_time))  # FPS = 1 / time to process loop
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, fpsInfo, (10, 20), font, 0.4, (255, 255, 255), 1)

    # update webcam1
    imgbytes = cv2.imencode(".png", frame)[1].tobytes()
    window["cam1"].update(data=imgbytes)

    # transform frame to grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # update webcam2
    imgbytes = cv2.imencode(".png", gray)[1].tobytes()
    window["cam1gray"].update(data=imgbytes)

video_capture.release()
cv2.destroyAllWindows()

Happy coding!

Greetings

El Bruno

Resources

#Python – Creating GUIs with #PySimpleGUI. 2 webcams view with 50 lines of code

Hi !

Working with Computer Vision is super fun. And there are some scenarios where displaying the step-by-step processing of an image is the best way to present it.

In most of my scenarios I use OpenCV; however, for a more detailed presentation I needed to search for and learn a GUI framework in Python. That's how I got to PySimpleGUI (see references).

Note: As a long-time C# dev, I miss XAML a lot !

PySimpleGUI is very simple (as you can expect!), and with a few lines of code we can create a UI like this one:

Let's display the camera feed and a grayscale view of the camera feed

2 cameras in Python, using PySimpleGUI to create a window to display them

Super easy !

Let’s take a look at the code

  • Lines 16-34. This is the main window definition. The window has 2 rows.
    • The 1st row has 2 columns with the 2 cameras; each camera has its own element key
    • The 2nd row has an image to display at the bottom
  • Line 30. This is the final merge of the 2 rows
  • Lines 32-34. This is the window definition. We can define title, transparency, etc.
  • Lines 37-40. Window event management. I'll write more about this for sure; right now, I'm only checking for window close to exit the loop.
  • Lines 51-53. Transform the camera frame to a byte array, based on the PNG format, and assign the array to the 1st camera viewer.
  • Lines 55-60. Transform the camera frame to a grayscale frame. Then transform the grayscale frame to a byte array, based on the PNG format, and assign the array to the 2nd camera viewer.
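For reference on that grayscale step: cv2.COLOR_BGR2GRAY applies the standard luma weights Y = 0.299 R + 0.587 G + 0.114 B (per OpenCV's color conversion docs). A pure-Python sketch of the per-pixel math, just to show what the one-liner does:

```python
def bgr_to_gray(b, g, r):
    # Standard luma weights used by OpenCV's BGR -> gray conversion
    return round(0.114 * b + 0.587 * g + 0.299 * r)

print(bgr_to_gray(255, 255, 255))  # white stays 255
print(bgr_to_gray(0, 0, 255))      # pure red -> 76
```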

Done !

# Bruno Capuano 2020
# display the camera feed using OpenCV
# display the camera feed with grayscale using OpenCV

import time
import cv2
import PySimpleGUI as sg

# Camera Settings
camera_Width = 320   # 480 # 640 # 1024 # 1280
camera_Heigth = 240  # 320 # 480 # 780 # 960
frameSize = (camera_Width, camera_Heigth)
video_capture = cv2.VideoCapture(0)
time.sleep(2.0)

# init Windows Manager
sg.theme("DarkBlue")

# def webcam col
colwebcam1_layout = [[sg.Text("Camera View", size=(60, 1), justification="center")],
                     [sg.Image(filename="", key="cam1")]]
colwebcam1 = sg.Column(colwebcam1_layout, element_justification='center')

colwebcam2_layout = [[sg.Text("Camera View GrayScale", size=(60, 1), justification="center")],
                     [sg.Image(filename="", key="cam1gray")]]
colwebcam2 = sg.Column(colwebcam2_layout, element_justification='center')

colslayout = [colwebcam1, colwebcam2]
rowfooter = [sg.Image(filename="avabottom.png", key="-IMAGEBOTTOM-")]
layout = [colslayout, rowfooter]

window = sg.Window("El Bruno - Webcams and GrayScale with PySimpleGUI", layout,
                   no_titlebar=False, alpha_channel=1, grab_anywhere=False,
                   return_keyboard_events=True, location=(100, 100))

while True:
    start_time = time.time()

    event, values = window.read(timeout=20)
    if event == sg.WIN_CLOSED:
        break

    # get camera frame
    ret, frameOrig = video_capture.read()
    frame = cv2.resize(frameOrig, frameSize)

    # if (time.time() - start_time) > 0:
    #     fpsInfo = "FPS: " + str(1.0 / (time.time() - start_time))  # FPS = 1 / time to process loop
    #     font = cv2.FONT_HERSHEY_DUPLEX
    #     cv2.putText(frame, fpsInfo, (10, 20), font, 0.4, (255, 255, 255), 1)

    # update webcam1
    imgbytes = cv2.imencode(".png", frame)[1].tobytes()
    window["cam1"].update(data=imgbytes)

    # transform frame to grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # update webcam2
    imgbytes = cv2.imencode(".png", gray)[1].tobytes()
    window["cam1gray"].update(data=imgbytes)

video_capture.release()
cv2.destroyAllWindows()

Happy coding!

Greetings

El Bruno

Resources

#Podcast – NTN 61 – Everything that happens when you tap an electronic card: millions of transactions, low-level programming, hardware and more hardware, security in under a second, and much more!

Hi !

Today we are lucky to talk with Ariel Perassi, Germán Pagliaroli, and Juan Quijano about everything that happens in that magic second when you tap an electronic card. We start with the low-level hardware that exists to support this and the novel solution they have implemented to support a variety of devices. This process has allowed them to cover 40+ cities and 80+ million daily transactions, spanning trips by train, bus, or boat, and much more.

By the way, this process reflects more than 10 years of experience working with payment cards; they raised the business and technical level of the conversation to an impressive point !!

Speakers

  • Ariel Perassi is Deputy Product Manager for SUBE at Nación Servicios (LinkedIn)
  • Germán Pagliaroli is Head of Software Architecture for SUBE at Nación Servicios (LinkedIn)
  • Juan Quijano is a Microsoft Certified Trainer, Azure Solutions Architect, and independent consultant on DevOps adoption (LinkedIn)
  • Bruno Capuano is Canada Innovation Lead at Avanade and a Microsoft AI MVP (LinkedIn)

Go to download

Happy coding!

Greetings

El Bruno

#Personal – USB Hub with On/Off switch, one of the best ideas ever !

Hi !

Every single day I have at least 5 different USB devices connected to my main PC. For example:

  • Logitech Brio WebCam
  • SENNHEISER SP 30 Speakerphone
  • USB BT and Wifi Dongle
  • and more…

And, depending on my current work, I also connect and disconnect extra devices. These days, I'm playing with a Thermal Camera, so an extra camera, a Raspberry Pi, and other devices are part of the pack.

Thermal detection as soon as a person is detected

So, with several devices connected, I also think about security: a camera connected 24×7 may be an entry point for an attack. My Logitech Brio has a nice lid to cover the lens, but the old LifeCam is just a plain camera.

Microsoft LifeCam and Thermal Camera

And these cameras also have microphones, so that's another entry point that I can hardly disconnect. I mean, I can open Device Manager and disable each device; however, this is not an easy or pleasant task.

This is my Device Manager configuration at any time of the day for my audio devices, and I only need the 2 selected ones.

Audio Devices in Windows 10

I created a couple of PowerShell scripts to disable the extra devices; however, these are not very reliable, and at random times I get an amazing Green Screen Of Death (the Windows Insider version of the BSOD).

Windows 10 Insider Green Screen Of Death

After doing some digging, I found the best solution for my pain:

A USB hub with On/Off switches for each port

USB hub for PC, with charging ports as well

As you can see in the image, I have the hub with several devices connected to my PC (blue lights), and I'm also using one of the power ports to power a Raspberry Pi (red lights).

USB Hub Ports

The USB hub also has an optional 12V/4A power adapter that can provide charging power up to 2.4A.

And while this is not a sponsored post, I'm so happy to have found a simple solution to enable / disable devices that I think it makes sense to share it !

Happy coding!

Greetings

El Bruno

References

Getting to Know Azure Cosmos DB [Course]

My friend Vicente wrote this awesome post about Cosmos DB

Vicente G. Guzmán Lucio

We are living in difficult times; unexpected events like Covid-19 force us to behave differently. Sectors that never had to be on the Internet must now adapt quickly or disappear. We already live in the era of cloud computing, where the services offered are backed by a powerful infrastructure ready for any kind of disaster.

For example, having a distributed database like Azure Cosmos DB will let you survive unexpected events like the one we are living through. Microsoft Azure has more than 60 regions around the world, across 140 countries.

Not convinced? See it for yourself in this chart:

Image

It is worth noting that NoSQL databases are widely used today, and we can define them as follows:
NoSQL refers to…

View original post 589 more words

#Event – Resources used with @ivanatilca during the “Lessons Learned creating a multiplatform AI project for Azure Kinect and Hololens 2” for the Global XR Talks

Hi !

We had an amazing time last week with Ivana at the Global XR Talks, sharing some of our lessons learned creating a Mixed Reality app to work on HoloLens 2 and Azure Kinect.

As usual, now it’s time for slides and code

Slides

Code

The main scripts for Camera and Cognitive Services interaction are available here https://github.com/elbruno/events/tree/main/20200806%20Global%20XR%20HL2%20to%20Azure%20Kinect%20Lessons%20Learned

Session Recording

Resources

#Unity3D – Making a CustomVision.ai HTTP Post call to have a better #MRTK experience with #CognitiveServices @ivanatilca

Hi !

Quick post today, with mostly sample code. It's all about a scenario that Ivana and I faced a couple of days ago while we were using MRTK and trying to use some Cognitive Services.

As of today, not all the services in Cognitive Services have official Unity3D support. In the end, it's not a problem: we can just make an HTTP POST call and that's it. However, this is not as easy as it's supposed to be.

So, after facing some issues with the System.Net.HttpClient library, I decided to use UnityWebRequest. This library's main objective is to work with HTTP forms; however, we can manage to send an image with a sample like this:

string DetectImage(byte[] image, string imageUrl)
{
    string body = string.Empty;
    using (var request = UnityWebRequest.Post(imageUrl, ""))
    {
        request.SetRequestHeader("Content-Type", "application/octet-stream");
        request.uploadHandler = new UploadHandlerRaw(image);
        request.SendWebRequest();
        while (request.isDone == false)
        {
            var wfs = new WaitForSeconds(1);
        }
        if (request.isNetworkError || request.isHttpError)
        {
            Debug.Log(request.error);
        }
        else
        {
            body = request.downloadHandler.text;
        }
    }
    return body;
}

As we can see in the previous code, there is no async / await support here, so I added a couple of ugly lines of code to wait for the response. We could improve this with a timeout or a delay applied here; for this sample, it works great.
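To sanity-check the same kind of endpoint outside Unity, the call can be reproduced with plain Python. The URL and key below are placeholders, and Prediction-Key is the header the Custom Vision prediction API expects; this is a sketch, not the post's actual code.

```python
import urllib.request

def build_detect_request(image_bytes, image_url, prediction_key):
    # Same shape as the Unity call: a POST with a raw image body
    request = urllib.request.Request(image_url, data=image_bytes, method="POST")
    request.add_header("Content-Type", "application/octet-stream")
    request.add_header("Prediction-Key", prediction_key)
    return request

request = build_detect_request(b"<png bytes here>", "https://example.com/predict", "YOUR-KEY")
# body = urllib.request.urlopen(request).read()  # uncomment to actually send it
```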

As a bonus, you can watch the full presentation in the Global XR YouTube channel here

Happy coding!

Greetings

El Bruno

References

Create Real-Time Applications with Blazor and SignalR

Mteheran's Blog

Blazor is one of the most advanced WebAssembly frameworks of the moment, and SignalR is a lightweight, easy-to-implement library that lets us add real-time communication to our apps. In this post we will learn how to combine these 2 technologies.

As usual, the first thing we must do is install the SignalR library and configure our hub:

We can do it from the NuGet package manager in Visual Studio, or use this command in the console:

Install-Package Microsoft.AspNetCore.SignalR -Version 1.1.0

We must configure SignalR globally in the project by adding the middleware, using services.AddSignalR(); and endpoints.MapHub<ChatHub>("/chat"); in Startup.cs

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
        services.AddSignalR();

        services.AddResponseCompression(opts =>
        {
            opts.MimeTypes = ResponseCompressionDefaults.MimeTypes.Concat(
                new[] { "application/octet-stream" });
        });

        services.AddCors(options => options.AddPolicy("CorsPolicy", builder =>
        {
            builder
                .AllowAnyMethod()
                .AllowAnyHeader()
                .AllowAnyOrigin();
        }));
    }

 public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { if (env.IsDevelopment()) {…

View original post 477 more words