Real-time applications with Blazor and SignalR – #BlazorForAll

#JavaScript – console-log-viewer.js, an amazing script to overlay the console in your HTML apps


Hi !

Yes, I know, I’m writing about JavaScript. It’s 2020, so anything can happen, and here I am, blogging about JavaScript.

So, I’m learning a lot about TensorFlow.js, and having some kind of logging is a must. I’m a huge fan of the Edge Developer Tools; however, in some scenarios, like a browser on a smartphone, just having the chance to display the console.log() output in the app is a super useful feature.

And that’s the main idea of [console-log-viewer.js] (see references): just add a script to your HTML and magic!

console-log-viewer.js displays logs and JavaScript errors in an overlay on top of your site. Useful for mobile web development. Installed in 5 seconds [1]. Works on every device and browser [2].

console-log-viewer.js

Just add this script to the HTML, and refresh the page.

<script src="https://markknol.github.io/console-log-viewer/console-log-viewer.js"></script>

There are other options, like starting minimized or docked at the bottom; they are passed as query-string parameters on the script URL (check the project readme for the exact names).

In the following animation, I’m checking the console log while loading and using a Machine Learning model in the browser to detect the use of face masks.

Happy coding!

Greetings

El Bruno

References

console-log-viewer.js

#ML.NET Image Classification with ML.NET Command-Line Interface (CLI), no #VS2019 needed !


Hi !

I haven’t written a lot about the ML.NET Command-Line Interface (CLI) (see references), and that’s a shame. It’s very powerful and supports most of the ML.NET features, for example the Model Builder Image Classification scenario.

The ML.NET CLI automates model generation for .NET developers.
To use the ML.NET API by itself, (without the ML.NET AutoML CLI) you need to choose a trainer (implementation of a machine learning algorithm for a particular task), and the set of data transformations (feature engineering) to apply to your data. The optimal pipeline will vary for each dataset and selecting the optimal algorithm from all the choices adds to the complexity. Even further, each algorithm has a set of hyperparameters to be tuned. Hence, you can spend weeks and sometimes months on machine learning model optimization trying to find the best combinations of feature engineering, learning algorithms, and hyperparameters.
The ML.NET CLI simplifies this process using automated machine learning (AutoML).

ML.NET CLI Documentation

So, let’s take a look at the basic sample for an Image Classification scenario with the ML.NET CLI. It starts with a set of images organized in folders; each folder represents the label for the images it contains.

The only required parameter is [--dataset], with the location of the folder with images.

I’ll use the following command to also define the output project name and the log file location:

mlnet image-classification --dataset "SimpleDrawings" --log-file-path "d:\src\Labs\mlnet\logs" --name "SimpleDrawingsBlog"

The process takes some time, and when it is finished, it generates 2 projects: one to consume the model and one to generate a new model.

Note: somehow the parameter [--name] didn’t generate the specified name for my output project. Time to refactor later.

mlnet 04 visual studio

Testing the generated model with a custom image gives us the following output.

2020-10-19 14:09:47.512322: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Using model to make single prediction -- Comparing actual Label with predicted Label from sample data...


ImageSource: D:\src\Labs\mlnet\SimpleDrawings\fish\i-1238b28c03144be496b7f54d82667fc6.jpg


Predicted Label value fish
Predicted Label scores: [0.98024565,0.015911957,0.0038424365]


=============== End of process, hit any key to finish ===============

This output shows the predicted label (the top scorer) and also the scores for each one of the labels.

            // Create single instance of sample data from first line of dataset for model input
            ModelInput sampleData = new ModelInput()
            {
                ImageSource = @"D:\src\Labs\mlnet\SimpleDrawings\fish\i-1238b28c03144be496b7f54d82667fc6.jpg",
            };

            // Make a single prediction on the sample data and print results
            var predictionResult = ConsumeModel.Predict(sampleData);

            Console.WriteLine("Using model to make single prediction -- Comparing actual Label with predicted Label from sample data...\n\n");
            Console.WriteLine($"ImageSource: {sampleData.ImageSource}");
            Console.WriteLine($"\n\nPredicted Label value {predictionResult.Prediction} \nPredicted Label scores: [{String.Join(",", predictionResult.Score)}]\n\n");
            Console.WriteLine("=============== End of process, hit any key to finish ===============");
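For clarity, the predicted label is simply the highest-scoring entry in the Score array. Here is a tiny Python sketch of that mapping; the label names other than fish are made up, since the CLI output above doesn’t list the label order:

```python
# Map per-label scores to the top label (label order is hypothetical).
labels = ["fish", "flower", "house"]  # assumed order, for illustration only
scores = [0.98024565, 0.015911957, 0.0038424365]  # scores from the output above

# The predicted label is the one with the maximum score.
predicted = labels[scores.index(max(scores))]
print(predicted)  # fish
```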

Super easy !

Happy coding!

Greetings

El Bruno

Resources

[Learning] Angular + NodeJS for a “.NET-tero (C#)”

Juanlu, ElGuerre

As a fanatic, lover, and devotee of the DotNet/DotNetCore (C#) world, over the last few weeks I have been able to dedicate time to increasing my skills, both in frontend (Angular) and in a new backend language (nodejs), plus lots and lots of Visual Studio Code.

Until now I have always been completely self-taught, putting a lot of effort into reading, using Google, Stack Overflow, and plenty of practice (trial and error), etc. This time, however, I decided to take a course with the goal of going much deeper and getting to know this whole new world.

I have to say that, after finishing it (32.5 hours of coding plus personal tests, assorted tinkering, etc.), I feel very satisfied with what I learned. So much so that day-to-day work has already taken me to real projects along these lines, making my job much easier and, most importantly, saving me time.

For those who know me, I thought I would never…

View original post 351 more words

#Podcast – NTN 67 – 2 great stories about experiences and technical communities, thanks to 2 super amazing Microsoft MVPs!


Hi!

I mentioned it on Twitter: it was hard for me to find a title for this episode. Mar and Rocio helped us have a great time sharing their professional and personal experiences.

Along the way there were memories of mutual friends (greetings to the Bisbe family!), events that have gone virtual, adapting to this year 2020, and much more.

Bonus: without knowing Mar and Rocio beforehand, we had a great time! I recommend listening to the audio.

Speakers

  • Maria del Mar Llambí is a Biz Apps MVP 🔸 M365/Power Platform Architect at KABEL (LinkedIn)
  • Rocio Romero López is a Cloud Solutions Specialist | Office Apps & Services MVP (LinkedIn)
  • Juan Carlos Quijano Abad is a Microsoft Certified Trainer, Azure Solutions Architect, and independent consultant on DevOps adoption (LinkedIn)
  • Bruno Capuano is Canada Innovation Lead at Avanade and a Microsoft AI MVP (LinkedIn)

Happy coding!

Greetings

El Bruno

#VS2019 – ML.NET Model Builder for Object Detection, be careful with file names and VoTT


Hi !

This is a sequel to yesterday’s post on the new Model Builder Object Detection scenario. I found this error, and it took me some time to figure out the reason.

As usual, it was an error in Model Builder with very few details and a suggestion to check the Azure ML Portal for more information.

On the Azure ML Portal, we can see that 3 experiments failed. Now it’s time to dig into the log files and try to figure out what happened.

mlnet 02 azure ml portal error

On the main experiment, after reading the logs, I found out that there is a [File missing for image] error, plus other file-related issues.

mlnet 03 azure portal error file missing

And, after some research (test, fail, and test again), I realized that some of my original file names had a space in them. For example, [apple 04.png].

When we create the labeling project using VoTT, the exported project makes some changes to the file names, and [apple 04.png] becomes [apple%2004.png]. Here is a part of the exported project:

{
    "name": "mlnet bad file name",
    "securityToken": "mlnet bad file name Token",
    "videoSettings": {
        "frameExtractionRate": 15
    },
    "tags": [
        {
            "name": "apple",
            "color": "#5db300"
        }
    ],
    "id": "aajYgiNHg",
    "activeLearningSettings": {
        "autoDetect": false,
        "predictTag": true,
        "modelPathType": "coco"
    },
    "version": "2.2.0",
    "lastVisitedAssetId": "3d30adce06faf9bf8c6540ec6435ef61",
    "assets": {
        "3d30adce06faf9bf8c6540ec6435ef61": {
            "asset": {
                "format": "png",
                "id": "3d30adce06faf9bf8c6540ec6435ef61",
                "name": "apple%2004.png",
                "path": "file:C:/ML/FreshFood/mlnet%20bad%20file%20names/apple%2004.png",
                "size": {
                    "width": 584,
                    "height": 510
                },
                "state": 2,
                "type": 1
            },
            "regions": [
                {
                    "id": "4BDOAk0Xq",
                    "type": "RECTANGLE",
                    "tags": [
                        "apple"
                    ],
                    "boundingBox": {
                        "height": 505.0856531049251,
                        "width": 579.0878504672897,
                        "left": 1.782359813084112,
                        "top": 1.6381156316916488
                    },

The solution was to use a tool to remove all the spaces and non-standard characters from my image data set. I also did the whole labeling process again, and when I launched my Object Detection scenario from Model Builder once more, everything worked fine!

Happy coding!

Greetings

El Bruno

Resources

#VS2019 – ML.NET Model Builder for Object Detection using #Azure Compute #mlnet #objectdetection


Hi !

There is a new scenario available in the ML.NET Model Builder for Visual Studio 2019: Object Detection. This scenario is not just image tagging: it allows us to detect objects in an image and get the specific coordinates and size of each detected object. As usual, it requires a starting data set with images and labels.

Model Builder Object Detection

This one is super helpful and also very easy to use. Let’s start with the first step: add a new Machine Learning element to a Visual Studio project, and select the Object Detection scenario.

select object detection scenario

This scenario only supports the Azure training environment. Lucky me, I already have an Azure ML environment up and running (see references for my previous posts).

The next step is to define the data source for the scenario. To do this, we need an image labeling file. The suggested tool for this is VoTT (see references); you need to install the local version of VoTT. As far as I understand, this Model Builder scenario only supports VoTT projects using local files.

Define Source Images and Labels

For this post, I created a sample project using Fresh Food labels. Just 4:

  • apple
  • apple rotten
  • banana
  • banana rotten

Important: The source connection defines the path for the source images that will be used for labeling. The target connection defines the project location and also the path for the exported project file.

vott project settings

Once we have defined our project properties, we can start defining tags and labeling the objects in the project images. This is a manual process, and it can be tedious.

Important: VoTT supports custom detailed areas like the ones you see in the image below. For the Model Builder scenario, these custom areas will be transformed into standard rectangles.

At any moment, you can save and export your project. This generates a new folder in the [Target Connection] location named [vott-json-export]. In this folder we can find all the labeled images and also the exported JSON file that we will use in Visual Studio.

This is sample content of the exported file:

{
    "name": "MLNet Fresh Food",
    "securityToken": "MLNet Fresh Food Token",
    "videoSettings": {
        "frameExtractionRate": 15
    },
    "tags": [
        {
            "name": "apple_rotten",
            "color": "#5db300"
        },
        {
            "name": "apple",
            "color": "#e81123"
        },
        {
            "name": "banana",
            "color": "#6917aa"
        },
        {
            "name": "banana_rotten",
            "color": "#015cda"
        }
    ],
    "id": "uF6EQo2eX",
    "activeLearningSettings": {
        "autoDetect": false,
        "predictTag": true,
        "modelPathType": "coco"
    },
    "version": "2.2.0",
    "lastVisitedAssetId": "27f36f1bbeb5abc8a505d677931d8e1d",
    "assets": {
        "27f36f1bbeb5abc8a505d677931d8e1d": {
            "asset": {
                "format": "png",
                "id": "27f36f1bbeb5abc8a505d677931d8e1d",
                "name": "bananarotten08.png",
                "path": "file:C:/ML/FreshFood/mlnet/bananarotten08.png",
                "size": {
                    "width": 568,
                    "height": 298
                },
                "state": 2,
                "type": 1
            },
            "regions": [
                {
                    "id": "VVg0KLN6d",
                    "type": "RECTANGLE",
                    "tags": [
                        "banana_rotten"
                    ],
                    "boundingBox": {
                        "height": 158.57173447537474,
                        "width": 470.0359550561798,
                        "left": 21.26011235955056,
                        "top": 24.248394004282655
                    },
                    "points": [
                        {
                            "x": 21.26011235955056,
                            "y": 24.248394004282655
                        },
                        {
                            "x": 491.29606741573036,
                            "y": 24.248394004282655
                        },
                        {
                            "x": 491.29606741573036,
                            "y": 182.8201284796574
                        },
                        {
                            "x": 21.26011235955056,
                            "y": 182.8201284796574
                        }
                    ]
                }
            ],
            "version": "2.2.0"
        },

Back again in Visual Studio, we can select this file as the input JSON file for our scenario.

mlnet object detection definition using the exported file

Train the Model

Everything is ready, let’s train the model using Azure Compute ;D

The process is similar to the Image Classification scenario: it all starts by uploading the source images to a blob in Azure, and then launching 4 Azure ML experiments:

  • AutoML
  • Preparation
  • Script
  • HyperDrive

Once the process is complete, we can also check output metrics like Precision and Accuracy. This one is not the best model; however, I consider it a great starting point for my demo blog.

If you want to know more about Azure ML, this is a great opportunity to play around with these experiments. The Script experiment (Run 16 in the image) has a lot to dig into, like the complete output log, some training Python files, the generated ONNX model, and more.

mlnet 11 object detection azure track 01

Test and consume the generated Model

Back in Visual Studio, we can check the model metrics and also the Best Model (I have so many questions here!).

And we can test the generated model. As you can see in the next images, using a threshold value of 50, the results are good. It’s still not separating the rotten apples from the good ones, so I may need to figure out some strategies to improve my model.

sample test detecting apples
sample test detecting bananas

Remember that all these features are still in preview, however they are an amazing starting point and tool to use and learn.

Happy coding!

Greetings

El Bruno

Resources

Creating configuration values for Blazor

Mteheran's Blog

Blazor, like all kinds of .NET Core apps, uses the appsettings.json file to store the configuration we want to use inside the app, so that when this configuration changes, the app changes its behavior and adjusts to the new values.

In Blazor, the configuration file must be created in the wwwroot folder, following the same pattern as .NET Core, where we can also add the environment name from the environment variable:

wwwroot/appsettings.json
wwwroot/appsettings.{ENVIRONMENT}.json

In this example I create a configuration value, apiUrl, to store the URL of the API the app will connect to:

{
    "apiUrl": "http://localhost:5050/api"
}

And then, to use it in a class, in this case UserService, I simply receive the IConfiguration configuration interface in the constructor:

 private HttpClient Http {get;set;}
 private string ApiUrl {get;set;}

        public UserService(HttpClient http, IConfiguration Configuration)
        {
            this.Http = http;
            this.ApiUrl = Configuration["apiUrl"];
        }

Also…

View original post 36 more words

Comprehensive Guide To Create an Azure Kubernetes Service with Az CLI

Roy Kim on Azure, Office 365 and SharePoint

Objective: To provide key configuration parameters for the beginner when creating the Azure Kubernetes Service with Azure CLI.

My design, with the related az aks create command configuration parameters.

The az aks create command’s documentation provides a list of about 60 parameters.

az aks create --name
              --resource-group
              [--aad-admin-group-object-ids]
              [--aad-client-app-id]
              [--aad-server-app-id]
              [--aad-server-app-secret]
              [--aad-tenant-id]
              [--admin-username]
              [--api-server-authorized-ip-ranges]
              [--attach-acr]
              [--ca-profile]
              [--client-secret]
              [--disable-rbac]
              [--dns-name-prefix]
              [--dns-service-ip]
              [--docker-bridge-address]
              [--enable-aad]
              [--enable-addons]
              [--enable-cluster-autoscaler]
              [--enable-managed-identity]
              [--enable-node-public-ip]
              [--enable-private-cluster]
              [--enable-rbac]
              [--generate-ssh-keys]
              [--kubernetes-version]
              [--load-balancer-idle-timeout]
              [--load-balancer-managed-outbound-ip-count]
              [--load-balancer-outbound-ip-prefixes]
              [--load-balancer-outbound-ips]
              [--load-balancer-outbound-ports]
              [--load-balancer-sku]
              [--location]
              [--max-count]
              [--max-pods]
              [--min-count]
              [--network-plugin {azure, kubenet}]
              [--network-policy]
              [--no-ssh-key]
              [--no-wait]
              [--node-count]
              [--node-osdisk-diskencryptionset-id]
              [--node-osdisk-size]
              [--node-vm-size]
              [--nodepool-labels]
              [--nodepool-name]
              [--nodepool-tags]
              [--outbound-type {loadBalancer, userDefinedRouting}]
              [--pod-cidr]
              [--service-cidr]
              [--service-principal]
              [--skip-subnet-role-assignment]
              [--ssh-key-value]
              [--subscription]
              [--tags]
              [--uptime-sla]
              [--vm-set-type]
              [--vnet-subnet-id]
              [--windows-admin-password]
              [--windows-admin-username]
              [--workspace-resource-id]
              [--zones {1, 2, 3}]

I will explain in detail 16 of my recommended parameters and their values for starting out with a development environment, with the intent of leaning towards a production design. Hope this can help kick-start your…

View original post 1,081 more words

#Windows10 – My own list of prerequisites to install dlib: Visual Studio C++, CMake and Cuda dlls


Hi !

Another blog post reminder, this time related to the prerequisites for installing dlib. And before starting, a dlib description:

Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real world problems. It is used in both industry and academia in a wide range of domains including robotics, embedded devices, mobile phones, and large high performance computing environments. Dlib’s open source licensing allows you to use it in any application, free of charge.

dlib.net

Dlib is a key piece in computer vision processes, so every time I deploy a new Windows 10 instance I need to install dlib, and here are some prerequisites or, as I like to call them, lessons learned the hard way.

Visual Studio for C++

One of the key requisites is [Visual Studio for C++]. My default VS installation doesn’t include C++, so here is my excuse to install it again.

Install CMake

Installing CMake is an easy one, and somehow I always forget to do it. More information is in the references section, and if you don’t know what CMake is:

CMake is a cross-platform free and open-source software tool for managing the build process of software using a compiler-independent method. It supports directory hierarchies and applications that depend on multiple libraries. It is used in conjunction with native build environments such as Make, Qt Creator, Ninja, Apple’s Xcode, and Microsoft Visual Studio. It has minimal dependencies, requiring only a C++ compiler on its own build system.

CMake, Wikipedia

All to avoid this

CUDA Path and cudnn64_7.dll

And the last one:

  • Download cudnn64_7.dll from https://developer.nvidia.com/cudnn
  • Copy the cudnn64_7.dll into the %CUDA_PATH%/bin directory
  • As reference, my CUDA Path is: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin

This one was buried deep in Stack Overflow and is super important!

Happy coding!

Greetings

El Bruno

References