#VS2019 – Let’s do some image classification with #MLNET Model Builder! (AKA, let’s create an image classifier model without a line of code)

Hi!

I’m getting ready for my last event of the year, and I just realized that in the latest update of Model Builder we have the chance to build our own Image Classifier scenario. Let’s start with the official Model Builder definition (see references):

ML.NET Model Builder provides an easy to understand visual interface to build, train, and deploy custom machine learning models. Prior machine learning expertise is not required. Model Builder supports AutoML, which automatically explores different machine learning algorithms and settings to help you find the one that best suits your scenario.

Working with images has been supported in ML.NET for a while. In the ML.NET samples, we have sample scenarios like

Image Classification Model Training – Preferred API (Based on native TensorFlow transfer learning)

In this sample app you create your own custom image classifier model by natively training a TensorFlow model with the ML.NET API and your own images.

We even have an amazing tutorial for creating our own image classification model from scratch:

Tutorial: Generate an ML.NET image classification model from a pre-trained TensorFlow model

Learn how to transfer the knowledge from an existing TensorFlow model into a new ML.NET image classification model. The TensorFlow model was trained to classify images into a thousand categories. The ML.NET model makes use of part of the TensorFlow model in its pipeline to train a model to classify images into 3 categories.

Training an Image Classification model from scratch requires setting millions of parameters, a ton of labeled training data and a vast amount of compute resources (hundreds of GPU hours). While not as effective as training a custom model from scratch, transfer learning allows you to shortcut this process by working with thousands of images vs. millions of labeled images and build a customized model fairly quickly (within an hour on a machine without a GPU). This tutorial scales that process down even further, using only a dozen training images.

And now I found that Model Builder also supports an Image Classification scenario.

It follows the Model Builder standard workflow, starting with the selection of the scenario:

model builder select scenario

And then selecting a folder with the images.

model builder images for training

Important: Model Builder expects image data to be JPG or PNG files organized in folders that correspond to the categories of the classification.

To load images into Model Builder, provide the path to a single top-level directory:

  • This top-level directory contains one subfolder for each of the categories to predict.
  • Each subfolder contains the image files belonging to its category.
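
Model Builder performs this folder scan itself; as a sanity check before training, the same label discovery can be sketched in a few lines of script (Python here purely for illustration; the helper name is mine and not part of Model Builder):

```python
from pathlib import Path

def scan_image_folder(root: str) -> dict:
    """Map each category subfolder to its count of JPG/PNG images."""
    labels = {}
    for sub in sorted(Path(root).iterdir()):
        if sub.is_dir():
            images = [f for f in sub.iterdir()
                      if f.suffix.lower() in (".jpg", ".jpeg", ".png")]
            labels[sub.name] = len(images)
    return labels

# Each key becomes a label for the classifier; each value should be a healthy image count.
```

Running this against the top-level directory quickly shows whether every category folder actually contains images before handing it to Model Builder.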

Once the folder is selected, we can see a preview of the images and labels loaded from the folder.

model builder folder selected image preview

For more information about how to organize images for this scenario, refer to Load training data into Model Builder.

And now we start the training process. This may take a while, depending on your hardware. I’m using the sample set of drawings that we used on the InsiderDev Tour for Custom Vision. These are 24 drawing images with 3 labels, and on a PC with an i7, 32 GB of RAM, and an SSD, the training process took a little over 2 minutes.

model builder train images complete

Once the training is complete, we have decent accuracy in our model, so it’s time to test. Before Model Builder’s last step, we have the chance to test the model with some test images.

Using one of the images that I created at Ignite in Orlando, the trained model classifies it as a human with 99% confidence.

model builder model trained test image

And the final step is to add the generated model and code to our project. I’ll write about how to use this generated code in the near future.

model builder code generated

Happy Coding!

Greetings @ Burlington

El Bruno

References

#AI – Getting started with #ComputerVision, #DeepLearning, and #OpenCV by Adrian Rosebrock @pyimagesearch

Hi!

When you start to research the amazing world of Computer Vision, you find that there are plenty of courses, tutorials, videos, and more resources. Sometimes it’s just too much, and it’s not easy to choose where to start.

That’s why, when you arrive at one of Adrian Rosebrock’s tutorials or articles, it ends up among your favorite bookmarks. He writes amazingly detailed step-by-step tutorials, and I learned a lot about Raspberry Pi and OpenCV from his website.

A couple of weeks ago, Adrian released an amazing resource for Computer Vision enthusiasts:

Need help getting started with Computer Vision, Deep Learning, and OpenCV?

No matter if you are starting from zero, have some knowledge, or are already an expert, you must look at this amazing compilation of resources. I’ll copy and paste the main topics.

And I can’t thank Adrian enough for his amazing work, and also for sharing all of this!

Happy coding!

Greetings @ Toronto

El Bruno

#Net – Learning C# using only a browser

Hi!

A few days ago, at a university, one of the attendees in a session asked me what the best way to learn C# was. As always, the answer depends on the level of the person who wants to learn, although it served as an excuse to see what’s new in the .NET ecosystem.

Well, in addition to the (now) classic Microsoft Docs, I found a resource that is rather interesting:

.NET In Browser tutorial

The format is rather simple: a small tutorial on basic C# concepts, a web code editor, and the chance to compile and see the result on the same page.

I1

As an entry point for concepts such as string handling, arrays, etc., it seems like a good place to start.

Greetings @ Burlington

El Bruno

References

#Net – Learning C# using only a browser

Hi!

A few days ago, at a university, one of the attendees in a session asked me what the best way to learn C# was. As always, the answer depends on the level of the person who wants to learn, although it served as an excuse to see what’s new in the .NET ecosystem.

Well, in addition to the classic Docs, I found a resource that is rather interesting:

.NET In Browser tutorial

The format is rather simple: a small tutorial on basic C# concepts, a web code editor, and the ability to compile and see the result on the same page.

I1

As an entry point for concepts such as string handling, arrays, etc., it seems like a good place to start.

Greetings @ Burlington

El Bruno

References

#Tutorial – How to build an #Amazon #AlexaSkill using Visual Studio 2017, C# and #Azure (5 of N)

Hi!

Now that I have written about how to use C# with Visual Studio 2017 to connect an Alexa Skill with an endpoint in Azure, I think it’s time to share some content which should have been at the beginning of the series:

How to design and plan user interactions with Alexa

Of course, I will not write about it, because the Alexa Developers account has a great series of short videos on the subject. Here is the list of videos:

How to Choose Projects for Voice – Amazon Alexa Voice Design Guide

Design for Voice in 3 Steps – Amazon Alexa Voice Design Guide

How to Understand Intents? – Amazon Alexa Voice Design Guide

How To Interact with Alexa – Amazon Alexa Voice Design Guide

And if we want something more for developers, the following series of 6 episodes is what we should add to our list of videos:

Developing Alexa Skills – Hello, Alexa! (Part 1 of 6)

 

Happy Coding!

Greetings @ Toronto

El Bruno

#Tutorial – How to create an #Amazon #AlexaSkill with Visual Studio 2017, C# and #Azure (5 of N)

Hi!

Now that I have written about how to use C# with Visual Studio 2017 to connect an Alexa Skill with an endpoint in Azure, I think it’s time to write what should have been the 1st post:

How to design and plan user interactions with Alexa

That said, I’m not going to write about it, since the Alexa Developers account has a great series of short videos on the subject. Here is the list:

How to Choose Projects for Voice – Amazon Alexa Voice Design Guide

Design for Voice in 3 Steps – Amazon Alexa Voice Design Guide

How to Understand Intents? – Amazon Alexa Voice Design Guide

How To Interact with Alexa – Amazon Alexa Voice Design Guide

And if we want something more for developers, the following series of 6 episodes is what we should add to our list of videos:

Developing Alexa Skills – Hello, Alexa! (Part 1 of 6)

 

Happy Coding!

Greetings @ Toronto

El Bruno

#Tutorial – How to build an #Amazon #AlexaSkill using Visual Studio 2017, C# and #Azure (4 of N)

Hi!

In the last post I commented on how to test a Skill using the traditional interface provided when editing an Alexa Skill. Well, another option that we have, in Beta mode, is the [Test Simulator].

I1

This new interface allows us to perform a couple of interesting actions. On the one hand, it allows us to enter the text with which we want to test our Skill, in the same way that we would with an Amazon Echo device.

For example, if we ask [Bruno Events] for [Help], it will return the following results:

I2

In this simulator we can listen to the same output that we will get on an Amazon Echo device. The result for the HoloLens events question is similar:

I3

And finally, we can try the speech recognition features: using the microphone of our computer, we can test our Skill directly with our voice.

I4

This allows us to try our Skill in the most natural way possible, even without having an Amazon Echo device.

Happy Coding!

Greetings @ Toronto

El Bruno

#Tutorial – How to create an #Amazon #AlexaSkill with Visual Studio 2017, C# and #Azure (4 of N)

Hi!

In the last post I commented on how to test a Skill using the traditional interface provided when editing an Alexa Skill. Well, another option that we have, in Beta mode, is the [Test Simulator].

I1

This new interface allows us to perform a couple of interesting actions. On the one hand, it lets us enter the text we want to test our Skill with, the same way we would with an Amazon Echo device.

For example, if we request [Help] from [Bruno Events], it returns the following:

I2

The interesting thing about this simulator is that we can listen to the same output we would get on an Amazon Echo device. The result for the HoloLens events question is similar:

I3

And finally, we can try the speech recognition features: using our computer’s microphone, we can test our Skill directly with our voice.

I4

This allows us to test our Skill in the most natural way possible, even without having an Amazon Echo device.

Happy Coding!

Greetings @ Toronto

El Bruno

#Tutorial – How to build an #Amazon #AlexaSkill using Visual Studio 2017, C# and #Azure (3 of N)

Hi!

In the previous post I described the steps necessary to create a website that interacts with an Alexa Skill. The next step is to publish the website to Azure. There are many tutorials on how to do this, so I will not write out the necessary steps (see references).

In my case, I published my Alexa Skill website at the following URL:

http://alexaskillbrunoevents.azurewebsites.net/

I1

It is important to remember this URL, since it is the one we will use in the configuration of our Skill.

Now it’s time to go back to the Amazon Developer Portal and edit the Skill we created in the 1st post. In the [Configuration] section we must define the location of the interaction endpoint, which in this case is our Azure WebApp:

I2

Important: by default, the URL we get from Azure is an HTTP site; we must change it to HTTPS to use it in an Amazon Alexa Skill.
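
The rewrite itself is a one-liner; as an illustration of this step (a sketch, not part of the Skill configuration UI; the helper name is mine), forcing the scheme to HTTPS looks like this:

```python
from urllib.parse import urlparse, urlunparse

def force_https(url: str) -> str:
    """Rewrite an http:// endpoint URL to https://, leaving everything else intact."""
    parts = urlparse(url)
    if parts.scheme != "https":
        parts = parts._replace(scheme="https")
    return urlunparse(parts)

# The Azure site from this post, rewritten for the Skill configuration:
print(force_https("http://alexaskillbrunoevents.azurewebsites.net/"))
# → https://alexaskillbrunoevents.azurewebsites.net/
```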

The other values in this section are left at their defaults, and we go to the [SSL Certificate] section. Azure sites already have a certificate for secure browsing, so we must select the 2nd option.

I3

And finally we get to the testing section, [Test]. In it we can see, in JSON mode, the different messages that are sent from Alexa to our endpoint. For example, if we invoke something simple like “help”, we will have the following result:

I4

{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "text": "Intent called AMAZON.HelpIntent.",
      "type": "PlainText"
    },
    "speechletResponse": {
      "outputSpeech": {
        "text": "Intent called AMAZON.HelpIntent."
      },
      "shouldEndSession": true
    }
  },
  "sessionAttributes": {
    "intentSequence": "GetEventInformationIntent;GetEventInformationIntent;AMAZON.HelpIntent",
    "EventTopic": "hololens"
  }
}

If, for example, we invoke something more complicated that uses Slots, we will see a different result. In this example, I ask Alexa for events about HoloLens:

I5

{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "text": "Intent called GetEventInformationIntent. Slots used: \r\nEventTopic – hololens , ",
      "type": "PlainText"
    },
    "speechletResponse": {
      "outputSpeech": {
        "text": "Intent called GetEventInformationIntent. Slots used: \r\nEventTopic – hololens , "
      },
      "shouldEndSession": true
    }
  },
  "sessionAttributes": {
    "intentSequence": "GetEventInformationIntent;GetEventInformationIntent;AMAZON.HelpIntent;GetEventInformationIntent;GetEventInformationIntent;GetEventInformationIntent;AMAZON.HelpIntent;GetEventInformationIntent;GetEventInformationIntent",
    "EventTopic": "hololens"
  }
}

Well, this gives an idea of how to connect an Azure API App as the endpoint of an Alexa Skill. In the following posts we will see more details on how to add extra functionality to the backend and how to use the new Alexa test environment.

Happy Coding!

Greetings @ Toronto

El Bruno

References

#Tutorial – How to create an #Amazon #AlexaSkill with Visual Studio 2017, C# and #Azure (3 of N)

Hi!

In the previous post I described the steps necessary to create a website that interacts with an Alexa Skill. The next step is to publish it to Azure. There are many tutorials on how to do this, so I won’t write out the necessary steps (see references).

In my case, I published the site at the following URL:

http://alexaskillbrunoevents.azurewebsites.net/

I1

It is important to remember this URL, since it is the one we will use in the configuration of our Skill.

Now it’s time to open the Amazon Developer Portal again and edit the Skill we created in the 1st post. In the [Configuration] section we must define the location of the interaction endpoint, which in this case is our Azure site:

I2

Important: by default the site is published with HTTP; we must change it to HTTPS.

The other values in this section are left at their defaults, and we move on to the [SSL Certificate] section. Azure sites already have a certificate for secure browsing, so we must select the 2nd option.

I3

And finally we reach the testing section, [Test]. There we can see, in JSON mode, the different messages that Alexa sends to our endpoint. For example, if we invoke something simple like “help”, we get the following result:

I4

{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "text": "Intent called AMAZON.HelpIntent.",
      "type": "PlainText"
    },
    "speechletResponse": {
      "outputSpeech": {
        "text": "Intent called AMAZON.HelpIntent."
      },
      "shouldEndSession": true
    }
  },
  "sessionAttributes": {
    "intentSequence": "GetEventInformationIntent;GetEventInformationIntent;AMAZON.HelpIntent",
    "EventTopic": "hololens"
  }
}

If, for example, we invoke something more complicated that uses Slots, we will see a different result. In this example, I ask Alexa for events about HoloLens:

I5

{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "text": "Intent called GetEventInformationIntent. Slots used: \r\nEventTopic – hololens , ",
      "type": "PlainText"
    },
    "speechletResponse": {
      "outputSpeech": {
        "text": "Intent called GetEventInformationIntent. Slots used: \r\nEventTopic – hololens , "
      },
      "shouldEndSession": true
    }
  },
  "sessionAttributes": {
    "intentSequence": "GetEventInformationIntent;GetEventInformationIntent;AMAZON.HelpIntent;GetEventInformationIntent;GetEventInformationIntent;GetEventInformationIntent;AMAZON.HelpIntent;GetEventInformationIntent;GetEventInformationIntent",
    "EventTopic": "hololens"
  }
}

Well, this gives an idea of how to connect an Azure API App as the endpoint of an Alexa Skill. In the following posts we will see more details on how to add extra functionality to the backend and how to use the new Alexa testing environment.

Happy Coding!

Greetings @ Toronto

El Bruno

References