From GitHub to Azure App Service through Jenkins pipelines

Juanlu, ElGuerre

From my point of view, Azure DevOps is perhaps one of the best tools for orchestrating DevOps pipelines, at least the one I feel most comfortable working with and, for me, the easiest. That said, today we are going to see how to work with Jenkins to achieve the same goal we covered in the previous post (with Azure DevOps), for the MyBudget application.

Configuration

Although we could download and install Jenkins locally, we will opt for an installation on Azure.

  • Create a new resource (Create a resource)
  • Select Jenkins
  • Click the Create button and complete the indicated steps until finally clicking “OK”.

Creating a new “Jenkins” resource in Azure

  • Once the Jenkins server is created, we will have access to the VM. By default the access protocol is HTTP, so we will not be able to access the URL “xxx.cloudapp.azure.com“ directly through a browser. …

View original post 745 more words


#MacOS – Not so good with Screen Resolutions and “even better” at projection mode

Hi !

The first time I did a demo connecting my MacBook Pro to a projector, I found out that the screen settings and configuration in macOS are completely different from the usual Windows experience. And, IMHO, not in a good way.

In Windows, when you are working with 2 or more screens (monitors / projectors) you have the chance to define specific settings for each one. These settings include orientation, resolution, and more.

A macOS display runs at its native resolution, which is the resolution the display was designed for. To change the resolution, you need to open System Preferences and select Displays. Then, in the [Display section], we can change the resolution by selecting the [Scaled] option.

So far, so good. Each option also shows a “Looks like resolution …” preview, which is great. But not good enough. Before a demo for a client or a user group, I like to dry-run my presentations and demos using a custom resolution. You never know which hardware is going to be at your final destination.

After testing a couple of apps, I finally found a very good one with the features I need:

Disable Monitor, https://github.com/Eun/DisableMonitor

Disable Monitor allows me to quickly change any screen resolution with 2 clicks; it can also lock the screens and detect monitors.

Simple and easy. Happy coding!

Greetings @ Toronto

El Bruno

My posts on experiences in Mac from a Windows user

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python (2/N)

Hi !

Yesterday I explained how to write a couple of lines in Python to perform live face detection in a webcam feed [Post]. Check the resources section to find more about the tools I’m using.

Today, I’ll add some more code to perform face recognition. And as usual, I’ll work with my kids to test this out. I’ll start by adding 2 face encodings, for Valentino and myself. The code is simple enough, and I use a simple 300×300 head-shot photo to train and get each face encoding.

The previous function returns a set of arrays with the face encodings and the face names. In the complete file, I’ll use this to analyze the camera frame (line 31) and later to check the matches for faces (lines 34 to 36).

The last lines are mostly cosmetic: they draw the frames for the detected faces and show the names.
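Since the embedded code doesn’t survive here, this is a minimal sketch of the recognition loop, assuming `face_recognition` and `opencv-python` are installed and a webcam is attached. The head-shot file names (`bruno.jpg`, `valentino.jpg`) and the `pick_name` helper are my own illustrative choices, not necessarily the original code:

```python
def pick_name(matches, known_names, unknown="Unknown"):
    # Return the first known name whose encoding matched, else "Unknown".
    for matched, name in zip(matches, known_names):
        if matched:
            return name
    return unknown


def main():
    # Imports kept inside so the helper above stays dependency-free.
    import cv2
    import face_recognition

    # Build the known encodings from one head-shot each
    # (assumes exactly one face per photo).
    known_names = ["Bruno", "Valentino"]
    known_encodings = [
        face_recognition.face_encodings(face_recognition.load_image_file(f))[0]
        for f in ("bruno.jpg", "valentino.jpg")
    ]

    video = cv2.VideoCapture(0)
    while True:
        ret, frame = video.read()
        if not ret:
            break
        # OpenCV frames are BGR; face_recognition expects RGB.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        locations = face_recognition.face_locations(rgb)
        encodings = face_recognition.face_encodings(rgb, locations)
        for (top, right, bottom, left), encoding in zip(locations, encodings):
            matches = face_recognition.compare_faces(known_encodings, encoding)
            name = pick_name(matches, known_names)
            cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
            cv2.putText(frame, name, (left, bottom + 20),
                        cv2.FONT_HERSHEY_DUPLEX, 0.8, (255, 255, 255), 1)
        cv2.imshow("Video", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    video.release()
    cv2.destroyAllWindows()


# To try it live (webcam required):
# main()
```

`compare_faces` returns one boolean per known encoding, so the matching reduces to picking the first `True` entry.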

The complete project is available here: https://github.com/elbruno/Blog/tree/master/20190521%20Python%20FaceRecognition

Happy Coding!

Greetings @ Burlington

El Bruno

Resources

My Posts

#VSCode – Let’s do some #FaceRecognition with 20 lines in #Python

Hi !

I’ve written a lot about how to use AI models in C# to perform tasks like face recognition, speech analysis, and more. During the Chicago CodeCamp, someone asked me how to perform face recognition in Python. I didn’t have any working sample to showcase this, and I failed trying to write a 2-minute app. So I added this to my to-do list.

For this demo I’ll use Anaconda as the base Python distribution and Visual Studio Code as the code editor. There are several packages to perform face detection in Python. I’ll use a mix of OpenCV and Adam Geitgey’s Face Recognition package to use the camera and to detect and recognize faces.

I’ll start by installing some packages to use in the Python app: dlib, OpenCV and face_recognition

"C:/Program Files (x86)/Microsoft Visual Studio/Shared/Anaconda3_86/python.exe" -m pip install dlib --user  

"C:/Program Files (x86)/Microsoft Visual Studio/Shared/Anaconda3_86/python.exe" -m pip install face_recognition --user

"C:/Program Files (x86)/Microsoft Visual Studio/Shared/Anaconda3_86/python.exe" -m pip install opencv-python --user  

And the first step will be to detect faces and draw frames around them. All of this in 20 lines of code.
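The original embedded snippet doesn’t render here, so this is a minimal sketch of what those roughly 20 lines look like, assuming `face_recognition` and `opencv-python` are installed and a webcam is present. Detecting on a quarter-scale copy is a common speed trick; the `scale_up` helper (my own name) restores the boxes to full-frame coordinates:

```python
def scale_up(box, factor=4):
    # Face locations are found on a downscaled frame; multiply the
    # (top, right, bottom, left) box back to full-frame coordinates.
    return tuple(v * factor for v in box)


def main():
    import cv2
    import face_recognition

    video = cv2.VideoCapture(0)
    while True:
        ret, frame = video.read()
        if not ret:
            break
        # Detect on a quarter-size RGB copy for speed.
        small = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
        rgb = cv2.cvtColor(small, cv2.COLOR_BGR2RGB)
        for box in face_recognition.face_locations(rgb):
            top, right, bottom, left = scale_up(box)
            cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
        cv2.imshow("Video", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    video.release()
    cv2.destroyAllWindows()


# To try it live (webcam required):
# main()
```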

When we run the app, we will see the camera feed and frames around the detected faces. In my next post I’ll add some extra code to perform face recognition.

Happy Coding!

Greetings @ Toronto

El Bruno

Resources

#AI – #Translatotron is not a dorky name, it’s maybe the best translator ever #GoogleResearch

Hi !

A couple of days ago, Google presented Translatotron. The name is not the best, but the idea is amazing:

Google researchers trained a neural network to map audio “voiceprints” from one language to another. After the tool translates an original audio, Translatotron retains the voice and tone of the original speaker. It converts audio input directly to audio output without any intermediary steps.

Model architecture of Translatotron.

As usual, the best way to understand this is to see Translatotron in action. Let’s take a look at the following audio samples.

Input (Spanish)
Reference translation (English)
Baseline cascade translation
Translatotron translation (canonical voice)
Translatotron translation (original speaker’s voice)

There is a full set of sample audios here: https://google-research.github.io/lingvo-lab/translatotron/#fisher_1

This is an amazing technology, and also a great starting point for scenarios where it’s important to keep the original speaker’s vocal characteristics. And let me be honest, it’s also scary if you think of fake-voice scenarios.

Happy coding!

Greetings @ Toronto

El Bruno

Source: Introducing Translatotron: An End-to-End Speech-to-Speech Translation Model

Fingerprint reader in Xamarin Forms

Jorge Diego Crespo

After a long period without writing anything, I’m back with this interesting and simple post in which I’m going to explain how to authenticate a user using the fingerprint reader (or Face ID on iPhones that perform authentication that way).

The process is so simple that adding a NuGet package and writing a bit of configuration on each platform will be enough. With that, our application will be ready to perform this authentication.

Adding the Plugin.Fingerprint package

The package used in this example is Plugin.Fingerprint, whose implementation can be found at https://github.com/smstuebe/xamarin-fingerprint. This package must be added to the three projects in our solution (Android, iOS and shared).

Configuring Android

The following permissions must be added to the AndroidManifest.xml:

<uses-permission android:name="android.permission.USE_FINGERPRINT" />
<uses-permission android:name="com.samsung.android.providers.context.permission.WRITE_USE_APP_FEATURE_SURVEY" />

In addition, the following line must be added to MainActivity.cs so that the plugin is able to show a dialog…

View original post 166 more words

#MLNET – How to use the AutoML API in a Console App

Hi !

In my last posts I was testing AutoML using the Model Builder inside Visual Studio and also the CLI commands. There is also an API to use this in a .Net app, and the usage is very simple.

It all starts, of course, by adding the [Microsoft.ML.AutoML] NuGet package.

I read the documentation in [How to use the ML.NET automated machine learning API], and I created the following sample using the same data as in my previous posts.

The final output displays the results for each one of the tests and showcases the top 3 ranked models. Once again, the LightGBM trainer is the best one to choose.

There is a full set of samples in the Machine Learning .Net Samples repository. I’ve reused some classes from the Common folder.

The complete source code is available at https://github.com/elbruno/Blog/tree/master/20190516%20MLNET%20AutoML%20API

Happy Coding!

Greetings @ Toronto

El Bruno

References

#MLNET – Are you a Command line user? MLNet CLI is great for some AutoML train tasks!

Hi !

Yesterday I wrote about how easy it is to use Model Builder to create machine learning models directly from data inside Visual Studio.

If you prefer to work with command-line interfaces, Machine Learning.Net AutoML also has a CLI, and with a couple of commands you can get some amazing results.

So, for this test I followed the tutorial [Auto generate a binary classifier using the CLI] and made some changes to the original command:

> mlnet auto-train --task binary-classification --dataset "yelp_labelled.txt" --label-column-index 1 --has-header false --max-exploration-time 10

I’m using the same dataset I used yesterday, and my command is:

mlnet auto-train --task regression --dataset "AgeRangeData03_AgeGenderLabelEncodedMoreData.csv" --label-column-index 2 --has-header true --max-exploration-time 60

The output is also interesting: it suggests using a FastTree regression trainer.

Yesterday’s test using the IDE suggested a LightGBM regression trainer.

So, I decided to run the CLI one more time with some more processing time. This time the result is also a FastTree regression trainer.

Unless you need to use Visual Studio, this option is amazing for quick tests, and you can also use the generated projects!

Happy Coding!

Greetings @ Toronto

El Bruno

References

#MLNET – Testing Machine Learning Model Builder preview. It’s so cool !

Hi !

Last week Machine Learning.Net 1.0 was officially announced during Build 2019, and the ML.Net team also announced a set of ML tools related to ML.Net.

One of the most interesting ones is Machine Learning Model Builder. You can get more information about Model Builder in the official website.

ML.NET Model Builder provides an easy to understand visual interface to build, train, and deploy custom machine learning models. Prior machine learning expertise is not required. Model Builder supports AutoML, which automatically explores different machine learning algorithms and settings to help you find the one that best suits your scenario.

Machine Learning Model Builder

The tool is in preview, but it’s already an amazing way to play around with ML. So I decided to give it a try with my small dataset of kids, the one I use in the Machine Learning.Net demos.

The structure of my CSV file is very simple with just 3 columns: Age, Gender and Label.

However, the first time I ran the scenario I found the following error:

Inferring Columns ...
Creating Data loader ...
Loading data ...
Exploring multiple ML algorithms and settings to find you the best model for ML task: regression
For further learning check: https://aka.ms/mlnet-cli
|     Trainer                             RSquared Absolute-loss Squared-loss RMS-loss  Duration #Iteration      |
[Source=AutoML, Kind=Trace] Channel started
Exception occured while exploring pipelines:
Provided label column 'Label' was of type String, but only type Single is allowed.
System.ArgumentException: Provided label column 'Label' was of type String, but only type Single is allowed.
   at Microsoft.ML.CLI.Program.<>c__DisplayClass1_0.<Main>b__0(NewCommandSettings options)
   at Microsoft.ML.CLI.CodeGenerator.CodeGenerationHelper.GenerateCode()
Please see the log file for more info.
Exiting ...

Which makes a lot of sense: my Label column is a String, and Model Builder expects a Single data type. So I updated my data file, replacing the labels with numbers, and I was ready for a 2nd test.

This time the training process started fine; however, I noticed that using just a small training dataset didn’t trigger any comparison between different algorithms. So I created a much bigger training dataset, and then I got the training process up and running.

In the end, the results are the ones below, and they are very interesting. I do most of my demos using a multiclass SDCA trainer, and AutoML suggests using a LightGBM trainer instead. This will be part of my Machine Learning.Net talks for sure in the future.

You can download the Visual Studio extension from https://dotnet.microsoft.com/apps/machinelearning-ai/ml-dotnet/model-builder and remember that we can talk about this at the Visual Studio 2019 event with the Mississauga .Net User Group in a couple of weeks!

Happy Coding!

Greetings @ Toronto

El Bruno