#Personal – We have a “Learn to pronounce” feature in Google !


Hi !

Some days ago, while I was searching for the meaning of a word, I found an amazing feature in Google:

Learn to pronounce

learn to pronounce feature in Google results

And it’s very basic and amazing at the same time.

  • You search for a definition of a word
  • In the result area, you get the speaker button, which plays the word
  • Once you play the word, a new feature will be available: [Learn to Pronounce]
  • This will open a new section, with the mouth movements for the American and British pronunciation

That’s it, super useful ! I could not find a lot of related information about this. It seems that it’s been around for over 2 years, based on these articles from The Verge:

Besides some great AI in the background, this is so amazing 😀

Happy coding!


El Bruno




#VS2019 – ML.NET Model Builder training using GPU, CPU and … Azure !


Hi !

In my previous posts on Model Builder, I showed the differences between using the CPU and the GPU to train a model. There is a 3rd option, which involves an Azure ML Experiment and performs the training in the cloud.

It took me some time to set up this environment, mostly because I tried to use an existing Azure Compute Instance that I already had, and Model Builder needs a Compute Cluster.

It’s also important to note that you need to create a dedicated, GPU-based Compute Cluster. There are costs associated with these resources, so run your numbers before you start.

And, here we go, now we can move forward with the Model Builder Assistant.

I ran some tests using a small image dataset, and it was awesome. Training a 24-image dataset took between 8 and 9 minutes, and the results were very good. A nice bonus here is the chance to get more details directly in the Azure Machine Learning portal.

We can drill into each experiment and take a look at metrics like F1 Score, Precision, Recall, and more.
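As a quick refresher on what those metrics mean, here is how Precision, Recall, and F1 Score relate to raw prediction counts (the numbers below are made up, just for illustration):

```python
# Precision, Recall and F1 from raw counts (hypothetical values for illustration)
tp, fp, fn = 80, 10, 20   # true positives, false positives, false negatives

precision = tp / (tp + fp)            # of everything we predicted positive, how much was right
recall = tp / (tp + fn)               # of everything actually positive, how much we found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"Precision: {precision:.3f}")  # 0.889
print(f"Recall:    {recall:.3f}")     # 0.800
print(f"F1 Score:  {f1:.3f}")         # 0.842
```

These are exactly the values the Azure Machine Learning portal reports per experiment run.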

Each Model Builder Image Classification project will trigger several Azure ML Experiments.

  • Automated ML
  • HyperDrive
  • Preparation
  • Script

The Script experiment is the one we can open to access detailed logs, and also the ONNX model.

So, I decided to go big and test this using the set of images from a Kaggle challenge [State Farm Distracted Driver Detection] (see references). This is a 1GB image set, 22,424 images, with ten categories.

The first step is to upload the 22,424 images to an Azure resource, which took some time.

And then, start tracking the progress in the Azure Machine Learning portal.

And after some time, the process triggered a timeout.

The details of the 4 experiments suggest that some limit was exceeded; I’m not sure if it came from the IDE or from the Azure side.

However, the experiment in charge of training the model [Run 12] produced some successful models. Accuracy, F1, and precision were improving.

Reading the logs, I can see the error was triggered on Epoch 8. I need to spend more time here to figure out what happened !

Note: I already reported the issue to the GitHub Repo.

As a final thought: using Azure as the training environment in Model Builder is an amazing option. A big dataset may be a problem, or maybe my quota is the problem. Anyway, with smaller datasets it worked great. I’ll keep an eye on this issue and update the blog with any news.

Happy coding!


El Bruno


#MSIgnite – FREE Microsoft Certification Voucher as part of the Cloud Skills Challenge


Hi !

Next week we will have the 1st virtual Ignite. I’m lucky enough to participate in a great AI session, and while building my agenda, I also realized that we have a Free Certification Voucher program, similar to the one we had at Build 2020.

Once you are registered for the event, you need to sign up for the Microsoft Ignite Cloud Skills Challenge.

Participate in the Microsoft Ignite Cloud Skills Challenge to earn a free Microsoft Certification exam and a chance to win prizes! Choose up to six different challenges to complete on Microsoft Learn. After you’ve completed your first challenge, you’ll earn a free Microsoft Certification exam. For each challenge you complete, you’ll earn five entries into a sweepstakes. One grand prize winner will receive a chance to meet with a Microsoft leader and invite up to four friends.

Microsoft Ignite Cloud Skills Challenge

Another important detail is this one:

The free certification exam is available for redemption starting October 12, 2020 and is valid until March 31, 2021. If you do not schedule and take your eligible exam within that timeframe, you will not be able to redeem your offer.
Additional details and instructions on how to redeem your free certification exam will be available starting October 12, 2020.

Microsoft Ignite Cloud Skills Challenge 2020: Free Certification Exam

So, now you can wait for the certification voucher and choose one exam from the Exams list.

Awesome !

Happy coding!


El Bruno


#YoloV3 – Rapid Object detection with 5 lines of code ( @code and #Python for the win! )


Hi !

Some time ago, I wrote a simple Python class as a wrapper for YoloV3. It allows me to analyze an image with just 5 lines of code. Once I use this class, I only press F5 in Visual Studio Code and it’s magic:

detecting images with 3 lines of code

A sample usage will be similar to this one. The only required parameter for the YoloV3 analyzer is the confidence. 0.5 is good enough for this demo:

# Bruno Capuano 2020
# simple implementation for YoloV3 object detection
import cv2
from yoloanalyzer import yoloV3_analyzer
image_path = "02.jpg"
# init Analyzer with confidence 50%
ya = yoloV3_analyzer(0.5)
# analyze and show image
image = cv2.imread(image_path)
newImage = ya.ImageProcess(image)
cv2.imshow("Rapid YoloV3 demo", newImage)
# wrap up
cv2.waitKey(0)
cv2.destroyAllWindows()

And the output is rocking: not only cats, but also dogs and humans !

I added a couple of parameters to define if we want labels and bounding boxes in the output image. The same image without bounding boxes will be:

And finally, the main class that performs this. It’s a very simple one; feel free to use it, and remember that you must check the official YoloV3 repository to get the files:

  • coco.names
  • yolov3.cfg
  • yolov3.weights
# Bruno Capuano 2020
# performs object detection using YoloV3 in an image and returns the processed image
import numpy as np
import cv2
import time

class yoloV3_analyzer:
    def __init__(self, confidence):
        self.confidence = confidence

    def InitYoloV3(self):
        self.weights = "yolov3.weights"
        self.config = "yolov3.cfg"
        self.labelsPath = "coco.names"
        self.LABELS = open(self.labelsPath).read().strip().split("\n")
        self.COLORS = np.random.uniform(0, 255, size=(len(self.LABELS), 3))
        self.net = cv2.dnn.readNetFromDarknet(self.config, self.weights)
        self.ln = self.net.getLayerNames()
        self.ln = [self.ln[i[0] - 1] for i in self.net.getUnconnectedOutLayers()]

    def ImageProcess(self, image, showLabels = True, showBoundingBox = True):
        # Init YOLO if needed
        if self.net is None:
            self.InitYoloV3()

        (H, W) = image.shape[:2]
        frame = image.copy()
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
        self.net.setInput(blob)
        starttime = time.time()
        layerOutputs = self.net.forward(self.ln)
        stoptime = time.time()
        print("Inference time: {:.4f} secs".format(stoptime - starttime))

        confidences = []
        outline = []
        class_ids = []
        for output in layerOutputs:
            for detection in output:
                scores = detection[5:]
                maxi_class = np.argmax(scores)
                confidence = scores[maxi_class]
                if confidence > self.confidence:
                    box = detection[0:4] * np.array([W, H, W, H])
                    (centerX, centerY, width, height) = box.astype("int")
                    x = int(centerX - (width / 2))
                    y = int(centerY - (height / 2))
                    outline.append([x, y, int(width), int(height)])
                    confidences.append(float(confidence))
                    class_ids.append(maxi_class)

        # non-maximum suppression removes overlapping candidate boxes
        box_line = cv2.dnn.NMSBoxes(outline, confidences, 0.5, 0.3)
        if len(box_line) > 0:
            flat_box = box_line.flatten()
            for i in flat_box:
                (x, y) = (outline[i][0], outline[i][1])
                (w, h) = (outline[i][2], outline[i][3])
                x_plus_w = round(x + w)
                y_plus_h = round(y + h)
                label = str(self.LABELS[class_ids[i]])
                color = self.COLORS[class_ids[i]]
                if showBoundingBox:
                    cv2.rectangle(frame, (x, y), (x_plus_w, y_plus_h), color, 2)
                if showLabels:
                    cv2.putText(frame, label, (x - 10, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
        return frame

    # Yolo state, lazily initialized on the first ImageProcess call
    net = None
    ln = None
    LABELS = None
    frameno = 0
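The NMSBoxes call in the class keeps only the best of several overlapping detections. As a toy illustration of the idea (not the exact OpenCV implementation), here is a minimal IoU-based non-maximum suppression in plain NumPy, with made-up boxes and scores:

```python
# Minimal sketch of non-maximum suppression. Boxes are [x, y, w, h];
# the sample boxes and scores below are invented for the demo.
import numpy as np

def iou(a, b):
    # intersection-over-union of two [x, y, w, h] boxes
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.3):
    # repeatedly keep the highest-scoring box and drop any box overlapping it too much
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        best = order[0]
        keep.append(int(best))
        order = np.array([i for i in order[1:] if iou(boxes[best], boxes[i]) < iou_threshold])
    return keep

boxes = [[10, 10, 100, 100], [12, 12, 100, 100], [200, 200, 50, 50]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the two overlapping boxes collapse into one detection
```

The first two boxes overlap almost completely, so only the higher-scoring one survives, plus the isolated third box.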

Happy coding!


El Bruno


MLOps – How to automate and accelerate the machine learning lifecycle?

This weekend I tried to step aside a little bit of my comfort zone to test what MLOps is all about.

First, let’s understand some use cases for how Machine Learning is used across different areas. ML adoption requires a cultural shift and a technology environment with people, processes, and platforms operating in the responsive, agile way organizations are looking to operate today.

What is MLOps?

Machine Learning Operations (MLOps) draws on DevOps principles and practices. Built upon notions of work efficiency, continuous integration, delivery, and deployment, DevOps responds to the needs of the agile business – in short, to be able to deliver innovation at scale.

How Does MLOps Benefit ML?

MLOps applies DevOps principles and best practices to ML delivery, enabling the delivery of ML-based innovation at scale to result in:

• Faster time to market of ML-based solutions

• More rapid rate of experimentation, driving innovation



#Event – Getting Started with Machine Learning .NET


Hi !

I’m back to share some amazing ML.Net experiences, this time with my friends from the [Central Ohio .NET Developer’s Group (CONDG)] at a virtual event on [Thursday, September 24, 2020].

It’s been a while since my Machine Learning.Net sessions, so I’m creating one from scratch to cover all the amazing new features that we have on ML.Net. Please check the registration page for more information:

Getting Started with Machine Learning.NET

If you are a developer and you have started to feel left out of the Machine Learning world, this session is for you. In this session, we will review the basics of machine learning, how to use existing models and services in standard developer apps, and how to get started with creating your own simple models.
Important: we don’t need any specific math or data skills to create ML models. We will cover the basics of Machine Learning.Net, a complete ML framework to work with C#, F#, or any other .NET Core language.
And I’m a developer first, so do not expect many slides. Instead, we will code a fully functional app from scratch.

Happy coding!


El Bruno

#Podcast – NTN 63 – Microsoft Regional Directors, un poco de tecnología y transformación digital



Today we have the privilege of talking with Alberto Diaz Martin, David Rodriguez Hernandez, and Juan Quijano about their experience as Microsoft Regional Directors. It’s super interesting to see how this recognition is focused 100% on the business side, and how it differs from more technical recognitions like the Microsoft MVP award. They also tell us about their experiences in executive roles, especially their responsibilities and vision as CTOs.

Also, Alberto and David are old battle companions and veteran rockers; toward the end we talk a bit about our jobs … and I ended up learning things I didn’t know about the Windows 10 desktop !


  • Alberto Diaz Martin is a Cloud Specialist | Chief Technology and Innovation Officer | Microsoft Regional Director | Microsoft Azure MVP | Office Apps & Services MVP | My passion is my work with technology (LinkedIn)
  • David J. Rodriguez Hernandez is CTO at Intelequia Software Solutions, Microsoft Regional Director, and Azure MVP (LinkedIn)
  • Juan Carlos Quijano Abad is a Microsoft Certified Trainer, Azure Solutions Architect, and independent consultant on DevOps adoption (LinkedIn)
  • Bruno Capuano es Canada Innovation Lead at Avanade and Microsoft AI MVP (LinkedIn)

Happy coding!


El Bruno

#Event – #MSIgnite: The promise of AI and ML – Will it change the world?


Hi !

This year I’ll also be at Ignite as part of an amazing group with a great set of experts to talk about solving problems using Artificial Intelligence. I hope we can meet next Wednesday, September 23, 3:15 PM – 3:45 PM EDT.

The promise of AI and ML – Will it change the world?

Americas Table Talk: We hear great stories about AI and Machine Learning helping solve everyday problems and big challenges, but how do we make sure these tools are being used responsibly? How do we make sure we are training ourselves and the next generation to use AI and Machine Learning ethically? Do the risks outweigh the benefits? Join this scheduled Teams Meeting to join this conversation – Unmute yourself, turn your camera on – No presentations here! Your community peers will lead a discussion on these questions, giving you a chance to meet others in the AI & ML community. Join the Data & AI Community Table on the Tech Community ahead of time to catch up on the conversation beforehand. Be prepared for chat and a fun interactive discussion!

Register: https://myignite.microsoft.com/sessions/8291e4cc-e57c-487f-b690-3f8070bca0e4


Bruno Capuano
Bruno Capuano. Innovation Lead- Avanade

Bruno Capuano is Innovation Lead at Avanade Canada and has been a Microsoft MVP for the past 13 years. He is an active Microsoft technologies community participant, a writer, and a frequent speaker at local and international technology conferences. Bruno also likes to hack on new technologies, play the guitar, and run. Run a lot, a habit that suddenly started after becoming the dad of the 2 best sons ever. You can contact him via Twitter (@elbruno) or on his blog: http://www.elbruno.com

Veronika Kolesnikova
Veronika Kolesnikova. Senior Software Engineer- Liberty Mutual

Veronika is a Senior Software Engineer in Boston and a 2x Microsoft MVP (AI). She holds a Master’s Degree in Information Technology. Veronika is passionate about backend web development, AI, cross-platform solutions, and AR. She works mainly with Microsoft technologies like C#, .NET, Xamarin, and Azure, but also with Node.js and React.js. She loves challenging herself and learning new development tools and languages. She’s a public speaker, hackathon mentor, author, and a co-organizer of the Boston Azure user group. In her free time, Veronika likes dancing, traveling, and practicing aerial yoga.

Matthew Renze
Matthew Renze. Data Science Consultant- Renze Consulting

Matthew Renze is a data science consultant, author, and public speaker. Over the past two decades, he’s taught over 300,000 software developers and IT professionals. He’s delivered over 100 keynotes, presentations, and workshops at conferences on every continent in the world (including Antarctica). His clients range from Fortune 500 companies to small tech startups around the globe. Matthew is a Microsoft MVP in AI, an ASPInsider, and an author for Pluralsight, Udemy, and Skillshare. He’s also an open-source software contributor. His focus includes artificial intelligence, data science, and machine learning.

Ivana Tilca
Ivana Tilca. Quality Manager- 3XM Group

Ivana Tilca is currently Quality Manager at 3XM Group (https://www.3xmgroup.com) and a Microsoft MVP in Artificial Intelligence (https://bit.ly/IvanaMVP). A former Microsoft employee, she was part of the Office Web Apps team in Redmond, Washington, and of the Microsoft “New Technologies” team in Argentina and Uruguay. You can follow her on Twitter at @ivanatilca.

Happy coding!


El Bruno

Manejo de Local Storage en Blazor

Local storage in Blazor; and for more information, check out the review of Blazor as a technology that we did on the podcast.


Mteheran's Blog

Handling session data, or data shared across different parts of a web application, is quite common and necessary.

Local storage in browsers allows us to save data that can be used in the whole context of our page and updated as needed.

Blazor does not have native support for this, since WebAssembly does not provide these functions yet. However, we can work around it in several ways.

  1. Use JavaScript interop

First, we must create one function to save and another to read data from storage, inside a JS file:

window.setToStorage = function setToStorage(name, value) {
    localStorage.setItem(name, value);
};

window.getToStorage = function getToStorage(name) {
    return localStorage.getItem(name);
};

Then, using IJSRuntime, we can call those functions from C# code:

 await JSRuntime.InvokeVoidAsync("setToStorage", "sessionToken", token);

2. Using the Blazored LocalStorage library

The simplest library for handling local…


#VS2019 – ML.NET Model Builder GPU vs CPU test: 4 times faster !


Hi !

Yesterday I wrote about the new options we have to train models in ML.Net Model Builder. The main news is that we now have the option to use our GPU to train models.

Quick recap: Model Builder supports 3 specific training environments:

  • Local (CPU)
  • Local (GPU)
  • Azure

Yesterday I tested training a small image recognition model using CPU and GPU, and the training times were very similar. The image training set was small, and I also hadn’t configured my GPU & CUDA environment, so I decided to raise my own bet and test with something a little more challenging.

For this new test, I’ll use a set of images from a Kaggle challenge [State Farm Distracted Driver Detection] (see references). This is a 1GB image set, 22,424 images, with ten categories.

Of course, I used Model Builder to train an Image Classifier scenario, here is a preview of the configuration:

CPU Training

This training scenario was much more resource-heavy than yesterday’s easy test. The total time was 39.2 minutes. Here is more detail:

Total experiment time : 2353.6729442 Secs
|                                                     Summary                                                    |
|ML Task: image-classification                                                                                   |
|Dataset: C:\Users\bruno\AppData\Local\Temp\5e873581-2dab-4d46-911d-cfc0a0455eb1.tsv                             |
|Label : Label                                                                                                   |
|Total experiment time : 2353.6729442 Secs                                                                       |
|Total number of models explored: 1                                                                              |

GPU Training

Using the GPU, training took about 1/4 of the CPU time: 9.6 minutes.

Total experiment time : 581.1946062 Secs
|                                                     Summary                                                    |
|ML Task: image-classification                                                                                   |
|Dataset: C:\Users\bruno\AppData\Local\Temp\cccb2b3f-dbce-45e5-b17e-872b6cc3f116.tsv                             |
|Label : Label                                                                                                   |
|Total experiment time : 581.1946062 Secs                                                                        |
|Total number of models explored: 1                                                                              |
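Doing the math on the two “Total experiment time” values from the summaries confirms the roughly 4x speedup:

```python
# Speedup computed from the two "Total experiment time" values reported above
cpu_secs = 2353.6729442
gpu_secs = 581.1946062

print(f"CPU: {cpu_secs / 60:.1f} min")         # ~39.2 min
print(f"GPU: {gpu_secs / 60:.1f} min")         # ~9.7 min
print(f"Speedup: {cpu_secs / gpu_secs:.2f}x")  # ~4.05x
```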


GPUs are great for deep learning because the type of calculations they were designed to process are the same as those encountered in deep learning. Images, videos, and other graphics are represented as matrices so that when you perform any operation, such as a zoom-in effect or a camera rotation, all you are doing is applying some mathematical transformation to a matrix.
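As a tiny illustration of that point, rotating a set of 2D points is just one matrix multiplication, the kind of operation GPUs chew through in parallel (a toy NumPy example):

```python
import numpy as np

# Rotate 2D points 90 degrees counter-clockwise with a single matrix multiply
theta = np.pi / 2
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

points = np.array([[1.0, 0.0],
                   [0.0, 2.0]])     # one point per row
rotated = points @ rotation.T       # the whole transform is one matrix product

print(np.round(rotated, 6))         # (1,0) -> (0,1), (0,2) -> (-2,0)
```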

Even if you have a non-powerful GPU (like mine!), you may want to give it a try when you are training a model. The ML.Net Model Builder documentation includes a series of steps to configure a CUDA 10.0 environment, which is good enough for most NVIDIA graphics cards.

Happy coding!


El Bruno