#Podcast – NTN 57 – DevOps, GitHub Actions, DevOps culture, deployment strategies and much more! @snavarropino @jc_quijano

Screen shot of podcast recording

Hi!

Today we have the pleasure of talking with Sergio Navarro Pino (@snavarropino) and Juan Carlos Quijano (@jc_quijano) about several interesting topics. We start by talking about DevOps, focused on experiences and culture. Since we are all geeks, we obviously end up talking about Azure DevOps, GitHub Actions, testing strategies, Blue / Green deployments and much more!

Speakers

  • Sergio Navarro Pino is a Tech Lead at Analyticalways (LinkedIn)
  • Juan Carlos Quijano Abad is a Microsoft Certified Trainer, Azure Solutions Architect, and independent consultant on DevOps adoption (LinkedIn)
  • Bruno Capuano is Canada Innovation Lead at Avanade and a Microsoft AI MVP (LinkedIn)

Happy coding!

Greetings

El Bruno

#RaspberryPi – Performance differences in #FaceRecognition using #OpenVino (code with @code!)


Hi !

I’ve been looking forward to using the amazing Intel Neural Compute Stick 2 for a while, and one of the first ideas I had was to check how fast my Raspberry Pi 4 can run face detection using this device.

The Intel team released a nice step-by-step installation guide for Raspberry Pi, and it works great. There are a couple of minor glitches that you need to figure out, like picking the latest package version, but everything else works fine.

Note: I downloaded my OpenVINO toolkit from here (https://download.01.org/opencv/2019/openvinotoolkit/R3/), and the downloaded file is l_openvino_toolkit_runtime_raspbian_p_2019.3.334.tgz.
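
A quick way to double-check that the toolkit is wired up correctly (a minimal sketch based on my setup; run it after sourcing the toolkit's setupvars.sh script, and note that the exact version string may differ):

# Sanity check: the OpenCV build bundled with the OpenVINO runtime
# reports an "-openvino" suffix in its version string.
import cv2
print(cv2.__version__)  # e.g. '4.1.2-openvino'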

Once installed, the first Python sample is a face detection one. It analyzes an image file using OpenCV to detect faces and creates a new output file with the detected faces. As I said, it is very straightforward.

So, I decided to create a new Python sample that runs live face detection on the camera feed and also displays the FPS. This is the code:

# perform face detection
# display detected face frame
# display FPS info in webcam video feed
# Based on the official sample described in the installer documentation
# Date: 2020 01 26
# Install OpenVINO™ toolkit for Raspbian* OS
# http://docs.openvinotoolkit.org/2019_R1/_docs_install_guides_installing_openvino_raspbian.html
import cv2
import time
import imutils

# Load the model (topology and weights).
net = cv2.dnn.readNet('face-detection-adas-0001.xml',
                      'face-detection-adas-0001.bin')

# Specify target device.
# ERROR net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
# OK    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
# ERROR net.setPreferableBackend(cv2.dnn.DNN_BACKEND_HALIDE)
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)

# open the video feed
video_capture = cv2.VideoCapture(0)

while True:
    start_time = time.time()
    ret, frame = video_capture.read()

    # resize the frame to improve performance
    frame = imutils.resize(frame, width=640, height=480)
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Prepare the input blob and perform an inference.
    blob = cv2.dnn.blobFromImage(rgb_frame, size=(640, 480), ddepth=cv2.CV_8U)
    net.setInput(blob)
    out = net.forward()

    # Draw detected faces on the frame.
    for detection in out.reshape(-1, 7):
        confidence = float(detection[2])
        xmin = int(detection[3] * frame.shape[1])
        ymin = int(detection[4] * frame.shape[0])
        xmax = int(detection[5] * frame.shape[1])
        ymax = int(detection[6] * frame.shape[0])
        if confidence > 0.5:
            cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), color=(0, 255, 0))

    # display FPS
    fpsInfo = "FPS: " + str(1.0 / (time.time() - start_time))  # FPS = 1 / time to process loop
    print(fpsInfo)
    font = cv2.FONT_HERSHEY_DUPLEX
    cv2.putText(frame, fpsInfo, (10, 20), font, 0.4, (255, 255, 255), 1)

    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()

The code is very straightforward, and the main points are:

  • It uses the face-detection-adas-0001 model from the Intel model zoo to perform the face detection: the .xml file describes the topology and the .bin file contains the weights
  • The setPreferableBackend() and setPreferableTarget() calls are the key to telling OpenCV to load the model and run the inference on the Intel device
  • I use imutils to resize the image to 640×480. Feel free to use any other library for this, even plain OpenCV (see the sketch after this list)
  • It also works with smaller resolutions; however, 640×480 is good for this demo
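
For example, here is a minimal sketch of the same resize using plain OpenCV instead of imutils. Note that, unlike imutils.resize(), cv2.resize() takes an explicit (width, height) tuple and does not preserve the aspect ratio for you:

# Resizing with plain OpenCV instead of imutils (minimal sketch)
import cv2

frame = cv2.imread('face.jpg')         # any test image
frame = cv2.resize(frame, (640, 480))  # (width, height); aspect ratio is not preserved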

And here is the final app running, analyzing almost 8 frames per second (8 FPS).

That is more than 10 times faster than the 0.7 FPS I got without the Intel NCS2 (8 / 0.7 ≈ 11).

I already wrote about running Visual Studio Code on the Raspberry Pi (see references); it is an amazing experience. I did all my Python coding in VS Code, remotely accessing my device via VNC. Python runs like a charm!

You can download the code from https://github.com/elbruno/rpiopenvino/tree/master/facedetection

References

My posts on Raspberry Pi

Dev posts for Raspberry Pi
Tools and Apps for Raspberry Pi
Setup the device
Hardware

#RaspberryPi – Visual Studio Code blank screen is fixed on version 1.42. Thanks @headmelted! @code


Hi!

When I wrote about how to install Visual Studio Code on a Raspberry Pi, there was an error in the app running on the device: the tool opened and displayed a blank window.

We needed to manually go back to version 1.29 to avoid this problem, and we also had to skip any updates, since each new version still had the blank window error.

I was following the issue on GitHub, and I was happy when, a few days ago, it was solved by headmelted (link: https://github.com/headmelted/codebuilds/issues/67#issuecomment-563335451).

I’ve updated the app to the latest version (1.42.0), and it is working fine.

raspberry pi 4 visual studio code version 1.42 running fine

If we check the apt list, we can confirm the installed version with the following command:

sudo apt list --installed | grep -i code-oss
raspberry pi 4 apt list for code-oss returns 1.42

So, please go and thank headmelted for the amazing work he is doing!

Happy coding!

Greetings

El Bruno

My posts on Raspberry Pi

Dev posts for Raspberry Pi
Tools and Apps for Raspberry Pi
Setup the device
Hardware

#CognitiveServices – Easy lines to convert CSV to JSON to be used on the #AnomalyDetector service


Hi!

After the event “Building an Anomaly Detector System with a few or no lines of code” at MsftReactor, some people asked for the couple of lines that I used to convert a CSV file to JSON to be used with the Cognitive Services Anomaly Detector, so here they are.

static string CsvToJson(string csvPath, string granularity = "daily", bool hasHeaders = true)
{
    char[] fieldSeparator = { ',' };
    var lines = System.IO.File.ReadAllLines(csvPath);

    // remove header
    if (hasHeaders)
        lines = lines.Skip(1).ToArray();

    // build series
    var arraySeries = new JArray();
    foreach (var line in lines)
    {
        if (string.IsNullOrEmpty(line)) continue;
        var fields = line.Split(fieldSeparator);
        var jsonSerie = new JObject
        {
            ["timestamp"] = fields[0],
            ["value"] = fields[1]
        };
        arraySeries.Add(jsonSerie);
    }

    var jobjectMain = new JObject
    {
        ["granularity"] = granularity,
        ["series"] = arraySeries,
    };
    var jsonComplete = new JArray { jobjectMain };
    return jsonComplete.ToString();
}

Important: you need the Newtonsoft.Json package to build the JSON content.

The input CSV is part of the ML.NET sample data and has this sample content:

Month,ProductSales
1-Jan,271
2-Jan,150.9
3-Jan,188.1
...
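
For reference, with the default parameters the function above turns those rows into JSON shaped like this (trimmed to the first two points):

[
  {
    "granularity": "daily",
    "series": [
      { "timestamp": "1-Jan", "value": "271" },
      { "timestamp": "2-Jan", "value": "150.9" }
    ]
  }
]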

And as a bonus, the full console project can be downloaded from here.

https://github.com/elbruno/Blog/tree/master/20191125%20CSV%20to%20JSON%20for%20Anomaly%20Detector

This project also has a second function that creates the JSON content using simple strings, without the need for Newtonsoft.Json.
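
And if you are curious about how that payload is used, here is a minimal Python sketch (not part of the console project, just my assumption of how you would wire it up) that sends the generated JSON to the Anomaly Detector detection endpoint; the resource name, key, and file name are placeholders:

# Minimal sketch: POST the generated JSON to the Anomaly Detector
# "entire series" detection endpoint. Resource name and key are placeholders.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com/anomalydetector/v1.0/timeseries/entire/detect"
headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",
    "Content-Type": "application/json",
}

# the output of CsvToJson saved to a file; note the detect endpoint expects a
# single JSON object, i.e. the first element of the array CsvToJson returns
with open("series.json") as f:
    body = f.read()

response = requests.post(endpoint, headers=headers, data=body)
print(response.json())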

Happy coding!

Greetings @ Burlington

El Bruno

References

#Event – Materials and Resources used during my #CustomVision and #AI session at #CDC2019

Hi!

Drafting this on the airplane, while also drafting a bigger post about the full and amazing experience at the Caribbean Developer Conference. So I’ll start with the usual slides and materials, and use this post later as a reference for the full experience.

Slides

Code

https://github.com/elbruno/events/tree/master/2019%2010%2004%20CDC

Links

Tweets

Greetings @ Toronto

El Bruno

References

My posts on Raspberry Pi

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python (4/N)


Hi!

Quick post today. I’ll pick up yesterday’s demo, showing the facial features, and add face recognition on top of that. In other words, we’ll move from this

To this

With a couple of extra lines for face recognition:

import face_recognition
import cv2
import numpy as np

def LoadFaces():
    bruno_image = face_recognition.load_image_file(r"d:\Faces\Bruno1.jpg")
    bruno_face_encoding = face_recognition.face_encodings(bruno_image)[0]
    valentino_image = face_recognition.load_image_file(r"d:\Faces\Valen1.jpg")
    valentino_face_encoding = face_recognition.face_encodings(valentino_image)[0]
    known_face_encodings = [
        bruno_face_encoding,
        valentino_face_encoding
    ]
    known_face_names = [
        "Bruno",
        "Valentino"
    ]
    return known_face_encodings, known_face_names

video_capture = cv2.VideoCapture(0)
known_face_encodings, known_face_names = LoadFaces()

while True:
    ret, frame = video_capture.read()
    # OpenCV uses BGR; reverse the channels to get the RGB frame
    rgb_frame = frame[:, :, ::-1]

    # draw the facial features
    face_landmarks_list = face_recognition.face_landmarks(rgb_frame)
    for face_landmarks in face_landmarks_list:
        for facial_feature in face_landmarks.keys():
            pts = np.array([face_landmarks[facial_feature]], np.int32)
            pts = pts.reshape((-1, 1, 2))
            cv2.polylines(frame, [pts], False, (0, 255, 0))

    # recognize the detected faces
    face_locations = face_recognition.face_locations(rgb_frame)
    face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
    for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
        matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
        name = "Unknown"
        face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
        best_match_index = np.argmin(face_distances)
        if matches[best_match_index]:
            name = known_face_names[best_match_index]
        # cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
        cv2.rectangle(frame, (left, bottom - 25), (right, bottom), (0, 255, 0), cv2.FILLED)
        font = cv2.FONT_HERSHEY_COMPLEX_SMALL
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 0.7, (255, 255, 255), 1)

    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()

There is some room for performance improvement, so I’ll focus on that in the next posts; a quick sketch of one idea follows below.
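
For example, a common trick borrowed from the face_recognition examples is to run the detection on a quarter-size copy of the frame and then scale the locations back up. Here is a minimal sketch of that idea (just face boxes, no recognition), not necessarily the approach I’ll end up using:

# Minimal sketch: detect on a quarter-size frame, scale locations back up
import cv2
import face_recognition

video_capture = cv2.VideoCapture(0)
while True:
    ret, frame = video_capture.read()
    # a quarter-size copy has ~16x fewer pixels to process
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
    rgb_small_frame = cv2.cvtColor(small_frame, cv2.COLOR_BGR2RGB)
    face_locations = face_recognition.face_locations(rgb_small_frame)
    for (top, right, bottom, left) in face_locations:
        # scale the coordinates back to the full-size frame
        cv2.rectangle(frame, (left * 4, top * 4), (right * 4, bottom * 4), (0, 255, 0), 2)
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video_capture.release()
cv2.destroyAllWindows()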

The complete project is available here https://github.com/elbruno/Blog/tree/master/20190528%20Python%20FaceRecognition

Happy Coding!

Greetings @ Burlington

El Bruno

Resources

My Posts

  1. Detecting Faces with 20 lines in Python
  2. Face Recognition with 20 lines in Python
  3. Detecting Facial Features with 20 lines in Python

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python (3/N)


Hi!

In my previous posts I explained how to detect faces and perform face recognition in Python. Today I’ll explore another feature in the face_recognition package: finding facial features.

The live camera output will be something like this:

Note: special thanks to my daughter, who is always happy to help me with this.

The main sample in the source code uses a photo to detect facial features and creates a new one with the detected features. In the following sample, it is amazing to see that it detects a faraway face behind the main ones and, somehow, it even detects some landmarks behind my daughter’s glasses:

I wanted to see how fast this library works on a live camera feed, and the results are very good.

I spent some time figuring out the best way to draw lines with OpenCV; in the end, the polylines() function is the one doing all the magic. It took me a while to find the best way to deal with the matrix transformations and some other performance tricks, but in the end I got this up and running in about 25 lines, which is kind of amazing. And the final code is very simple:

import face_recognition
import cv2
import numpy as np

video_capture = cv2.VideoCapture(0)

while True:
    ret, frame = video_capture.read()
    # OpenCV uses BGR; reverse the channels to get the RGB frame
    rgb_frame = frame[:, :, ::-1]

    # find the landmarks and draw each facial feature as a polyline
    face_landmarks_list = face_recognition.face_landmarks(rgb_frame)
    for face_landmarks in face_landmarks_list:
        for facial_feature in face_landmarks.keys():
            pts = np.array([face_landmarks[facial_feature]], np.int32)
            pts = pts.reshape((-1, 1, 2))
            cv2.polylines(frame, [pts], False, (0, 255, 0))

    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()

The complete project is available here https://github.com/elbruno/Blog/tree/master/20190528%20Python%20FaceRecognition

Happy Coding!

Greetings @ Burlington

El Bruno

Resources

My Posts

  1. Detecting Faces with 20 lines in Python
  2. Face Recognition with 20 lines in Python

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python (2/N)


Hi!

Yesterday I explained how to write a couple of lines in Python to perform live face detection on a webcam feed [Post]. Check the resources section to find out more about the tools I’m using.

Today, I’ll add some more code to perform face recognition. And as usual, I’ll work with my kids to test this out. I’ll start by adding two face encodings, for Valentino and myself. The code is simple enough; I use a simple 300×300 head-shot photo to get each face encoding.

def LoadFaces():
    bruno_image = face_recognition.load_image_file(r"d:\Faces\Bruno1.jpg")
    bruno_face_encoding = face_recognition.face_encodings(bruno_image)[0]
    valentino_image = face_recognition.load_image_file(r"d:\Faces\Valen1.jpg")
    valentino_face_encoding = face_recognition.face_encodings(valentino_image)[0]
    known_face_encodings = [
        bruno_face_encoding,
        valentino_face_encoding
    ]
    known_face_names = [
        "Bruno",
        "Valentino"
    ]
    return known_face_encodings, known_face_names

The previous function returns a pair of arrays with the face encodings and the face names. In the complete file, I use these to analyze each camera frame and then check it for face matches (the compare_faces and face_distance calls below):

import face_recognition
import cv2
import numpy as np

def LoadFaces():
    bruno_image = face_recognition.load_image_file(r"d:\Faces\Bruno1.jpg")
    bruno_face_encoding = face_recognition.face_encodings(bruno_image)[0]
    valentino_image = face_recognition.load_image_file(r"d:\Faces\Valen1.jpg")
    valentino_face_encoding = face_recognition.face_encodings(valentino_image)[0]
    known_face_encodings = [
        bruno_face_encoding,
        valentino_face_encoding
    ]
    known_face_names = [
        "Bruno",
        "Valentino"
    ]
    return known_face_encodings, known_face_names

video_capture = cv2.VideoCapture(0)
known_face_encodings, known_face_names = LoadFaces()

while True:
    ret, frame = video_capture.read()
    # OpenCV uses BGR; reverse the channels to get the RGB frame
    rgb_frame = frame[:, :, ::-1]

    # find the faces in the frame and compare them against the known encodings
    face_locations = face_recognition.face_locations(rgb_frame)
    face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
    for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
        matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
        name = "Unknown"
        face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
        best_match_index = np.argmin(face_distances)
        if matches[best_match_index]:
            name = known_face_names[best_match_index]
        # draw a box around the face and a label with the name
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
        cv2.rectangle(frame, (left, bottom - 25), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_COMPLEX_SMALL
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 0.7, (255, 255, 255), 1)

    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()

The last lines are mostly cosmetic: they draw the frames around the detected faces and show the names.

The complete project is available here https://github.com/elbruno/Blog/tree/master/20190521%20Python%20FaceRecognition

Happy Coding!

Greetings @ Burlington

El Bruno

Resources

My Posts

#Windows10 – Windows #VisionSkills sample UWP App

Hi!

Yesterday the Windows team announced the preview version of Windows Vision Skills. So today I was browsing the samples on GitHub, and I created a simplified version of the skeleton tracker using a live feed from a webcam.

Here are some notes about my GitHub sample

  • The UWP app requires Windows 10 version 1809
  • I added the NuGet packages [Microsoft.AI.Skills.Vision.SkeletalDetectorPreview] and [Microsoft.Toolkit.Uwp.UI.Controls]
  • The MainView uses the CameraPreview control from the [Microsoft.Toolkit.Uwp.UI.Controls] toolkit.
  • Each frame is processed, and I use the SkeletalDetectorBinding to detect skeletons / bodies
  • The core detection is performed here
        private async Task RunSkillAsync(VideoFrame frame, bool isStream)
        {
            m_evalPerfStopwatch.Restart();

            // Update input image and run the skill against it
            await m_skeletalDetectorBinding.SetInputImageAsync(frame);
            await m_skeletalDetectorSkill.EvaluateAsync(m_skeletalDetectorBinding);

            m_evalPerfStopwatch.Stop();
            m_skeletalDetectionRunTime = m_evalPerfStopwatch.ElapsedMilliseconds;

            await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
            {
                m_bodyRenderer.Update(m_skeletalDetectorBinding.Bodies, !isStream);
                m_bodyRenderer.IsVisible = true;
                UISkillOutputDetails.Text = $"Found {m_skeletalDetectorBinding.Bodies.Count} bodies (took {m_skeletalDetectionRunTime} ms)";
            });
        }
  • There is also a BodyRenderer.cs class used to draw the skeletons on top of the CameraPreview image control; it draws lines on an empty canvas.

You can download the sample code from here https://github.com/elbruno/Blog/tree/master/20190501%20VisionSkills%20Skeleton%20Sample

Greetings @ Burlington

El Bruno

References

#AI – Multi-Task Deep Neural Networks for Natural Language Understanding

goku at keyboard

Hi!

A Microsoft Research team has released a new deep neural network for learning universal language embeddings: the Multi-Task Deep Neural Network for Natural Language Understanding (MT-DNN). Language embedding is a process that maps elements in sentences or paragraphs to vector representations. It is mainly used in tools like LUIS (Language Understanding) to analyze text and identify intents, entities, and more.
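
To build some intuition about what these vector representations enable: once sentences are embedded, comparing meanings reduces to comparing vectors. Here is a toy sketch (the vectors are made up for illustration; real embeddings have hundreds of dimensions):

# Toy sketch: intent matching as vector similarity (made-up vectors)
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

book_flight   = np.array([0.90, 0.10, 0.05])  # "book me a flight"
reserve_plane = np.array([0.85, 0.20, 0.10])  # "reserve a plane ticket"
weather_today = np.array([0.05, 0.15, 0.95])  # "what's the weather today?"

print(cosine(book_flight, reserve_plane))  # high: similar intent
print(cosine(book_flight, weather_today))  # low: different intent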

A complete description of MT-DNN can be found on the official Microsoft Research blog (see references). I found it interesting that the approach incorporates a pre-trained bidirectional transformer language model, known as BERT, developed by Google AI.

As usual, the code, based on PyTorch, is available on GitHub: https://github.com/namisan/MT-DNN. The repository contains the pre-trained models, the source code, and the readme file describing step by step how to reproduce the results reported in the MT-DNN paper.

More info: Towards universal language embeddings

Greetings @ Toronto

El Bruno

References