#Event – Global AI Bootcamp Toronto on Dec 12 🤖🤖🤖 @GlobAICommunity


Hi !

This has become my annual tradition: Global AI Bootcamp is the last event of the year!
And this year, we have a set of amazing sessions planned on the following topics:

  • Introduction to AI and Cognitive Services for Developers and Information Workers
  • Understanding Autonomous systems, Machine Teaching and Bonsai Platform
  • Knowledge Mining and MLOps

We have speakers from all around Canada and a special guest from the South!

As usual, the event is free. No food this year (sorry!), but you can spend a nice Saturday morning learning about AI at your chosen location!

Registration: https://www.meetup.com/metrotorontoug/events/267883977/

Full details here 👇👇👇

Intro and Welcome

Session 0. 10:00 – 10:10

Kick off of the amazing event

Introduction to AI and Cognitive Services for Developers and Information Workers

Session 1. 10:10 – 11:00

Speaker: Prashant G Bhoyar | Microsoft AI MVP

“Artificial Intelligence and Machine Learning are the new buzzwords in the industry. Microsoft’s vision is to make AI accessible to every enterprise, data scientist, developer, information worker, consumer and device everywhere in the world. AI has a big role to play in the enterprise space. The field of AI is progressing at a rapid pace. Without understanding the concepts behind these advanced technologies, developers and administrators will struggle to evaluate the potential impact of new tools and solutions. In this session, we will break down the concepts behind existing technologies, outline various tools available today, and discuss the direction of AI and ML for Developers. We will cover how developers, Power Users, and Information workers can take advantage of Microsoft’s AI and Cognitive Services offerings to build real-life enterprise solutions.

You will learn:
1) Overview of Microsoft AI Platform
2) What are the cognitive services?
3) What tools are available today?
4) How to use Cognitive Services to implement real-life business solutions?”

Understanding Autonomous systems, Machine Teaching and Bonsai Platform

Session 2. 11:10 – 12:00

Speaker: Ivana Tilca | Microsoft AI MVP | 3XM Group Quality Manager

Innovations in AI are creating the next wave of disruption in industrial technology. Autonomous machines are more than an expansion of automated systems: they are an entirely new way to amplify human expertise. In this session you will take a look at how to speed up the creation of AI-powered automation to improve production efficiency and reduce downtime, without requiring data scientists, using Project Bonsai. We will also take a look at AirSim, a simulator for drones, cars and more, built on Unreal Engine. AirSim is a platform for AI research to experiment with deep learning, computer vision and reinforcement learning algorithms for autonomous vehicles.

Knowledge Mining and MLOps

Session 3. 12:10 – 13:00

Speakers: Amol Mane, Meghana Madhusudhan and Niloofar Nayebi

Knowledge mining is an emerging discipline in artificial intelligence (AI) that uses a combination of intelligent services to quickly learn from vast amounts of information. It allows organizations to deeply understand and easily explore information, uncover hidden insights, and find relationships and patterns at scale. This platform allows organizations to use machine learning models to address specific business needs. The life cycle of the models can be managed using MLOps. MLOps, or DevOps for machine learning, enables data science and IT teams to collaborate and increase the pace of model development and deployment via monitoring, validation, and governance of machine learning models. These two solutions combined can create a strong platform to extract, enrich, and expose hidden data within any organization.
In this session you are going to learn about the three main steps in a knowledge mining platform:

– Ingestion

– Enrichment

– Exploration

And three main pipelines in MLOps:

– Continuous Integration

– Continuous Delivery

– Deployment

Let’s meet there !

Happy coding!

Greetings

El Bruno

#Event – Resources used during the session “Hack a drone, hack the camera and use AI” at the Global AI Tour, Lahore Pakistan, 2020


Hi !

I had a great time earlier today with the Microsoft Student Partners from Lahore, Pakistan, for the Global AI Tour. As usual, time for slides and code:

Slides

Code

https://github.com/elbruno/events/tree/main/2020%2011%2030%20Global%20AI%20Tour%20Pakistan

Resources

Happy coding!

Greetings

El Bruno

#Podcast – NTN 72 – On #MultiCloud scenarios, #Cloud Centers of Excellence and much more.


Hello!

Today Richard joins us, in one of the episodes where I have taken the most notes. The amount of information and experience he shared was fantastic.

Apologies in advance: during the first 6 or 7 minutes we have some sound problems. The good news is that we now know how to balance the load between Amazon, Google, and Microsoft to get better recording quality.

Another great episode with new friends!

Speakers

  • Richard Steven Padilla Montaño is Director of Cloud Architecture at Bankia (LinkedIn)
  • Juan Carlos Quijano Abad is a Microsoft Certified Trainer, Azure Solutions Architect, and independent consultant on DevOps adoption (LinkedIn)
  • Bruno Capuano is Canada Innovation Lead at Avanade and Microsoft AI MVP (LinkedIn)

Go to download

Happy coding!

Greetings

El Bruno

Use MS Lists and Power Automate to provision Private Channels in Teams

Norm Young

In my previous blog post I showed how easy it is to use Microsoft Lists and Power Automate to provision Microsoft Teams. In this blog post I will show how to extend the previous solution to include the creation of a Microsoft Teams Private Channel. At the time of writing there is no "create Private channel" action in Power Automate, which means we have to leverage the Graph API. I found creating the Graph API connector complicated, but it's worth doing for all of the options that Graph brings to Power Automate.

There are three parts to this solution that range from easy to complex, at least in my experience:

  1. Create Graph API connection (complex)
  2. Update “Projects” list (easy)
  3. Create new Power Automate Flow (moderate)

Part 1 – Create Graph API Connector

From the Microsoft 365 admin center, open “Azure Active Directory”.

Azure Active Directory

Create a new app registration by selecting…

View original post 905 more words

#dotnet – Packaging my WinForm #OpenCV and #net5 app in one self-contained file (Thanks @shanselman!)


Hi !

A couple of days ago, Scott Hanselman posted about a super cool feature (see references):

… you can make a single EXE that will run on any Windows Machine in two commands

How to make a WinForms app with .NET 5 entirely from the command line and publish as one self-contained file

and I was wondering how this would work with my WinForms tests with OpenCV. So I picked up my Face Recognition using DNN app and decided to give it a try.

When I just build my Face Recognition using DNN app, it creates an output with

  • 22 files
  • 9 folders

Of course, this includes the Caffe model and the proto file. I know I can remove some unnecessary files; this is just for reference.

opencv net5 winform app with all the included files

Back to Scott’s post, I decided to clean this and run the publish command

dotnet publish -r win-x64 /p:PublishSingleFile=true /p:IncludeNativeLibrariesForSelfExtract=true

And the output was amazing:

  • My application (exe) self-contained file
  • My ML Models, prototxt and caffe model
  • OpenCVSharp dependency for Windows
opencv net5 winform app self contained file

This is great! But wait, there is still an option here to PACK everything: that means adding the parameter IncludeAllContentForSelfExtract=True.

dotnet publish -r win-x64 /p:PublishSingleFile=true /p:IncludeNativeLibrariesForSelfExtract=true /p:IncludeAllContentForSelfExtract=True
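For repeat builds, these same publish flags can also be persisted in the project file, so a plain dotnet publish picks them up. A minimal sketch using the standard .NET 5 MSBuild properties:

```xml
<!-- In the .csproj: equivalent of the command-line /p: switches -->
<PropertyGroup>
  <RuntimeIdentifier>win-x64</RuntimeIdentifier>
  <PublishSingleFile>true</PublishSingleFile>
  <IncludeNativeLibrariesForSelfExtract>true</IncludeNativeLibrariesForSelfExtract>
  <IncludeAllContentForSelfExtract>true</IncludeAllContentForSelfExtract>
</PropertyGroup>
```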

And it really packs everything. However, it does not work with my default code. OpenCVSharp loads the models from a physical location, and with the IncludeAllContentForSelfExtract option, the Caffe model and prototxt are embedded as resources.

_netPose = CvDnn.ReadNetFromCaffe(PoseProtoTxt, PoseCaffeModel);
_netPose.SetPreferableBackend(Net.Backend.OPENCV);

There should be a trick here, for sure. I'll spend some time figuring out how we can load models directly from a single-package app file!

Happy coding!

Greetings

El Bruno

References

#dotnet – Pose detection from the 🎦 camera feed using #OpenCV and #net5. Home-made #kinect!


Hi !

LearnOpenCV is an amazing resource to learn about OpenCV, with lots of real-life problems solved with OpenCV. Most of the samples are in C++ or Python, so I decided to pick one related to pose estimation and, using .NET 5 in a WinForms app, build something like this:

net5 opencv pose estimation on real camera feed

The main model is OpenPose (see references). The model is amazing, and it also works fast: ~1 FPS. There are other variations, adding Body, Foot, Face, and Hand estimation, and more. I'll try to share the usage of some of the other models in C# in future posts.

Now, as usual, a huge code snippet with only the frame recognition and processing to detect the body joints.

private void CaptureCameraCallback()
{
    while (true)
    {
        if (!_run) continue;
        var startTime = DateTime.Now;

        _capture.Read(_image);
        if (_image.Empty()) return;
        var imageRes = new Mat();
        Cv2.Resize(_image, imageRes, new Size(320, 240));
        if (_detectPose)
        {

            var frameWidth = imageRes.Cols;
            var frameHeight = imageRes.Rows;

            const int inWidth = 368;
            const int inHeight = 368;

            // Convert Mat to batch of images
            using var inpBlob = CvDnn.BlobFromImage(imageRes, 1.0 / 255, new Size(inWidth, inHeight), new Scalar(0, 0, 0), false, false);

            _netPose.SetInput(inpBlob);

            using var output = _netPose.Forward();
            var H = output.Size(2);
            var W = output.Size(3);

            var points = new List<Point>();

            for (var n = 0; n < nPoints; n++)
            {
                // Probability map of corresponding body's part.
                using var probMap = new Mat(H, W, MatType.CV_32F, output.Ptr(0, n));
                var p = new Point2f(-1, -1);

                Cv2.MinMaxLoc(probMap, out _, out var maxVal, out _, out var maxLoc);

                var x = (frameWidth * maxLoc.X) / W;
                var y = (frameHeight * maxLoc.Y) / H;

                if (maxVal > thresh)
                {
                    p = maxLoc;
                    p.X *= (float)frameWidth / W;
                    p.Y *= (float)frameHeight / H;

                    Cv2.Circle(imageRes, (int)p.X, (int)p.Y, 8, Scalar.Azure, -1);
                    //Cv2.PutText(imageRes, Cv2.Format(n), new Point((int)p.X, (int)p.Y), HersheyFonts.HersheyComplex, 1, new Scalar(0, 0, 255), 1);
                }

                points.Add((Point)p);
            }

            WriteTextSafe(@$"Joints {nPoints} found");

            var nPairs = 14; //(POSE_PAIRS).Length / POSE_PAIRS[0].Length;

            for (var n = 0; n < nPairs; n++)
            {
                // lookup 2 connected body/hand parts
                var partA = points[posePairs[n][0]];
                var partB = points[posePairs[n][1]];
                if (partA.X <= 0 || partA.Y <= 0 || partB.X <= 0 || partB.Y <= 0)
                    continue;
                Cv2.Line(imageRes, partA, partB, new Scalar(0, 255, 255), 8);
                Cv2.Circle(imageRes, partA.X, partA.Y, 8, new Scalar(0, 0, 255), -1);
                Cv2.Circle(imageRes, partB.X, partB.Y, 8, new Scalar(0, 0, 255), -1);
            }

        }
// rest of the code to calc FPS and display the image
    }
}
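The key mapping in the loop above, from a peak location in the network's H×W probability map back to frame pixels, is a proportional rescale. The same arithmetic, sketched in Python with made-up illustration values:

```python
def heatmap_to_frame(max_loc, map_w, map_h, frame_w, frame_h):
    """Scale a peak found in the probability map to frame coordinates."""
    mx, my = max_loc
    x = frame_w * mx / map_w
    y = frame_h * my / map_h
    return x, y

# A peak at (23, 11) in a 46x46 map maps onto a 320x240 frame:
print(heatmap_to_frame((23, 11), 46, 46, 320, 240))  # (160.0, 57.39...)
```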

Super fun! And check the references for the model and support files download location.

Happy coding!

Greetings

El Bruno

References

#dotnet – GoogleNet detection from the 🎦 camera feed using #OpenCV and #net5. Bonus: C++ to C# time!


Hi !

So I was browsing the OpenCV documentation and I found a nice sample that uses the opencv_dnn module for image classification, using the GoogLeNet network trained on the Caffe Model Zoo.

So I gave it a try, and got a decent .NET 5 WinForms app running at ~30 FPS.

opencv net5 load and analyze camera frames with googlenet

The model was trained with 1000 classes, and once you get the main focus on the camera it works great with objects like a machine, mug, bottle, etc. There is a nice amount of code here, and because the DNN analysis is performed in a separate thread, I need to update the label details using a thread-safe Invoke call.

using System;
using System.IO;
using System.Linq;
using System.Threading;
using System.Windows.Forms;
using OpenCvSharp;
using OpenCvSharp.Dnn;
using OpenCvSharp.Extensions;
using Point = OpenCvSharp.Point;
using Size = OpenCvSharp.Size;

namespace Demo11_WinFormGoogleNet
{
    public partial class Form1 : Form
    {
        private bool _run = true;
        private bool _useGoogleNet = false;
        private VideoCapture _capture;
        private Mat _image;
        private Thread _cameraThread;
        private bool _fps = false;
        private Net _netGoogleNet;
        private string[] _classNames;
        const string ProtoTxt = @"models\bvlc_googlenet.prototxt";
        const string CaffeModel = @"models\bvlc_googlenet.caffemodel";
        const string SynsetWords = @"models\synset_words.txt";
        private delegate void SafeCallDelegate(string text);

        public Form1()
        {
            InitializeComponent();
            Load += Form1_Load;
            Closed += Form1_Closed;
        }

        private void Form1_Closed(object sender, EventArgs e)
        {
            _cameraThread.Interrupt();
            _capture.Release();
        }

        private void btnStart_Click(object sender, EventArgs e)
        {
            _run = true;
        }

        private void btnStop_Click(object sender, EventArgs e)
        {
            _run = false;
        }

        private void btnGoogleNet_Click(object sender, EventArgs e)
        {
            _useGoogleNet = !_useGoogleNet;
        }

        private void buttonFPS_Click(object sender, EventArgs e)
        {
            _fps = !_fps;
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            _classNames = File.ReadAllLines(SynsetWords)
                .Select(line => line.Split(' ').Last())
                .ToArray();
            _netGoogleNet = CvDnn.ReadNetFromCaffe(ProtoTxt, CaffeModel);
            _capture = new VideoCapture(0);
            _image = new Mat();
            _cameraThread = new Thread(new ThreadStart(CaptureCameraCallback));
            _cameraThread.Start();
        }

        private void CaptureCameraCallback()
        {
            while (true)
            {
                if (!_run) continue;
                var startTime = DateTime.Now;
                _capture.Read(_image);
                if (_image.Empty()) return;
                var imageRes = new Mat();
                Cv2.Resize(_image, imageRes, new Size(320, 240));
                if (_useGoogleNet)
                {
                    // Convert Mat to batch of images
                    using var inputBlob = CvDnn.BlobFromImage(imageRes, 1, new Size(224, 224), new Scalar(104, 117, 123));
                    _netGoogleNet.SetInput(inputBlob, "data");
                    using var prob = _netGoogleNet.Forward("prob");
                    // find the best class
                    GetMaxClass(prob, out int classId, out double classProb);
                    var msg = $"Best class: #{classId} '{_classNames[classId]}' - Probability: {classProb:P2}";
                    // display output
                    WriteTextSafe(msg);
                }
                if (_fps)
                {
                    var diff = DateTime.Now - startTime;
                    var fpsInfo = "FPS: Nan";
                    if (diff.Milliseconds > 0)
                    {
                        var fpsVal = 1.0 / diff.Milliseconds * 1000;
                        fpsInfo = $"FPS: {fpsVal:00}";
                    }
                    Cv2.PutText(imageRes, fpsInfo, new Point(10, 20), HersheyFonts.HersheyComplexSmall, 1, Scalar.White);
                }
                var bmpWebCam = BitmapConverter.ToBitmap(imageRes);
                pictureBoxWebCam.Image = bmpWebCam;
            }
        }

        private void WriteTextSafe(string text)
        {
            if (lblOutputAnalysis.InvokeRequired)
            {
                var d = new SafeCallDelegate(WriteTextSafe);
                lblOutputAnalysis.Invoke(d, new object[] { text });
            }
            else
            {
                lblOutputAnalysis.Text = text;
            }
        }

        private static void GetMaxClass(Mat probBlob, out int classId, out double classProb)
        {
            // reshape the blob to a 1x1000 matrix
            using (var probMat = probBlob.Reshape(1, 1))
            {
                Cv2.MinMaxLoc(probMat, out _, out classProb, out _, out var classNumber);
                classId = classNumber.X;
            }
        }
    }
}
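GetMaxClass() is essentially an argmax over the 1000 class probabilities. Stripped of the Mat plumbing, the same idea looks like this in Python (toy probabilities, not real network output):

```python
def get_max_class(probs):
    """Return (class_id, probability) of the most likely class."""
    class_id = max(range(len(probs)), key=lambda i: probs[i])
    return class_id, probs[class_id]

probs = [0.05, 0.80, 0.15]  # toy softmax output for 3 classes
print(get_max_class(probs))  # (1, 0.8)
```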

Super fun! And check the references for the model and support files download location.

Happy coding!

Greetings

El Bruno

References

#dotnet – Age and Gender estimation from the 🎦 camera feed using #OpenCV and #net5


Hi !

Face detected! So the next step is to use some prebuilt models to perform additional actions: estimating the age of a face, and also the gender. In order to do this, I downloaded a couple of models from here.

Disclaimer: these models are just sample models, do not use them in production. These models do not cover all the necessary scenarios for a real implementation.

And the final WinForms app is kind of cute!

Below you can find the complete Form1 source code, but first let's take a look at the sample analyzing a magazine photo.

2020-11-23_16-37-43 opencv net 5 detecting multiple faces

So let’s analyze the code. For this sample, we load 3 models to work with age, faces and gender.

// # detect faces, age and gender using models from https://github.com/spmallick/learnopencv/tree/08e61fe80b8c0244cc4029ac11e44cd0fbb008c3/AgeGender
const string faceProto = "models/deploy.prototxt";
const string faceModel = "models/res10_300x300_ssd_iter_140000_fp16.caffemodel";
const string ageProto = @"models/age_deploy.prototxt";
const string ageModel = @"models/age_net.caffemodel";
const string genderProto = @"models/gender_deploy.prototxt";
const string genderModel = @"models/gender_net.caffemodel";
_ageNet = CvDnn.ReadNetFromCaffe(ageProto, ageModel);
_genderNet = CvDnn.ReadNetFromCaffe(genderProto, genderModel);
_faceNet = CvDnn.ReadNetFromCaffe(faceProto, faceModel);

Once the models are loaded, in the loop to analyze camera frames, we perform face detection, and then age and gender estimation.

while (true)
{
    if (!_run) continue;
    var startTime = DateTime.Now;

    _capture.Read(_image);
    if (_image.Empty()) return;
    var imageRes = new Mat();
    Cv2.Resize(_image, imageRes, new Size(320, 240));
    var newImage = imageRes.Clone();

    if (_doFaceDetection) DetectFaces(newImage, imageRes);

    if (_fps) CalculateFps(startTime, newImage);

    var bmpWebCam = BitmapConverter.ToBitmap(imageRes);
    var bmpEffect = BitmapConverter.ToBitmap(newImage);

    pictureBoxWebCam.Image = bmpWebCam;
    pictureBoxEffect.Image = bmpEffect;
}

For each detected face, we perform the age and gender estimation. To do this, we crop the detected face (plus some padding) and perform the estimation on the cropped image.

private void AnalyzeAgeAndGender(int x1, int y1, int x2, int y2, Mat imageRes, Mat newImage)
{
    // get face frame
    var x = x1 - Padding;
    var y = y1 - Padding;
    var w = (x2 - x1) + Padding * 3;
    var h = (y2 - y1) + Padding * 3;
    Rect roiNew = new Rect(x, y, w, h);
    var face = imageRes[roi: roiNew];

    var meanValues = new Scalar(78.4263377603, 87.7689143744, 114.895847746);
    var blobGender = CvDnn.BlobFromImage(face, 1.0, new Size(227, 227), mean: meanValues,
        swapRB: false);
    _genderNet.SetInput(blobGender);
    var genderPreds = _genderNet.Forward();

    GetMaxClass(genderPreds, out int classId, out double classProbGender);
    var gender = _genderList[classId];

    _ageNet.SetInput(blobGender);
    var agePreds = _ageNet.Forward();
    GetMaxClass(agePreds, out int classIdAge, out double classProbAge);
    var age = _ageList[classIdAge];

    var label = $"{gender},{age}";
    Cv2.PutText(newImage, label, new Point(x1 - 10, y2 + 20), HersheyFonts.HersheyComplexSmall, 1, Scalar.Yellow, 1);
}
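One caveat with the padded ROI above: for a face near the frame border, x1 - Padding can go negative, or the rectangle can run past the image edge, and cropping would then fail. A small clamping helper, sketched in Python (names are mine; the C# version would be analogous):

```python
def padded_roi(x1, y1, x2, y2, pad, frame_w, frame_h):
    """Expand a face box by pad pixels on each side, clamped to the frame.

    Returns (x, y, width, height) of a rectangle guaranteed to lie inside
    a frame_w x frame_h image.
    """
    x = max(0, x1 - pad)
    y = max(0, y1 - pad)
    right = min(frame_w, x2 + pad)
    bottom = min(frame_h, y2 + pad)
    return x, y, right - x, bottom - y

# Face at (5, 5)-(50, 60) with padding 10 in a 320x240 frame
# gets clipped at the top-left corner:
print(padded_roi(5, 5, 50, 60, 10, 320, 240))  # (0, 0, 60, 70)
```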

private void GetMaxClass(Mat probBlob, out int classId, out double classProb)
{
    // reshape the blob to 1x1000 matrix
    using var probMat = probBlob.Reshape(1, 1);
    Cv2.MinMaxLoc(probMat, out _, out classProb, out _, out var classNumber);
    classId = classNumber.X;
    Debug.WriteLine($"X: {classNumber.X} - Y: {classNumber.Y} ");
}

It's also worth mentioning the GetMaxClass() function, which retrieves the best detected class from the probability result.

And the complete source code:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;
using System.Windows.Forms;
using OpenCvSharp;
using OpenCvSharp.Dnn;
using OpenCvSharp.Extensions;
using Point = OpenCvSharp.Point;
using Size = OpenCvSharp.Size;

namespace Demo10_WinFormAgeAndGender
{
    public partial class Form1 : Form
    {
        private bool _run = true;
        private bool _doFaceDetection = true;
        private bool _doAgeGender = false;
        private VideoCapture _capture;
        private Mat _image;
        private Thread _cameraThread;
        private bool _fps = false;
        private Net _faceNet;
        private Net _ageNet;
        private Net _genderNet;
        private const int LineThickness = 2;
        private const int Padding = 10;
        private readonly List<string> _genderList = new List<string> { "Male", "Female" };
        private readonly List<string> _ageList = new List<string> { "(0-2)", "(4-6)", "(8-12)", "(15-20)", "(25-32)", "(38-43)", "(48-53)", "(60-100)" };

        public Form1()
        {
            InitializeComponent();
            Load += Form1_Load;
            Closed += Form1_Closed;
        }

        private void Form1_Closed(object sender, EventArgs e)
        {
            _cameraThread.Interrupt();
            _capture.Release();
        }

        private void btnStart_Click(object sender, EventArgs e)
        {
            _run = true;
        }

        private void btnStop_Click(object sender, EventArgs e)
        {
            _run = false;
        }

        private void btnFDDNN_Click(object sender, EventArgs e)
        {
            _doFaceDetection = !_doFaceDetection;
        }

        private void buttonFPS_Click(object sender, EventArgs e)
        {
            _fps = !_fps;
        }

        private void btnAgeGender_Click(object sender, EventArgs e)
        {
            _doAgeGender = !_doAgeGender;
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            // detect faces, age and gender using models from https://github.com/spmallick/learnopencv/tree/08e61fe80b8c0244cc4029ac11e44cd0fbb008c3/AgeGender
            const string faceProto = "models/deploy.prototxt";
            const string faceModel = "models/res10_300x300_ssd_iter_140000_fp16.caffemodel";
            const string ageProto = @"models/age_deploy.prototxt";
            const string ageModel = @"models/age_net.caffemodel";
            const string genderProto = @"models/gender_deploy.prototxt";
            const string genderModel = @"models/gender_net.caffemodel";
            _ageNet = CvDnn.ReadNetFromCaffe(ageProto, ageModel);
            _genderNet = CvDnn.ReadNetFromCaffe(genderProto, genderModel);
            _faceNet = CvDnn.ReadNetFromCaffe(faceProto, faceModel);
            _capture = new VideoCapture(0);
            _image = new Mat();
            _cameraThread = new Thread(new ThreadStart(CaptureCameraCallback));
            _cameraThread.Start();
        }

        private void CaptureCameraCallback()
        {
            while (true)
            {
                if (!_run) continue;
                var startTime = DateTime.Now;
                _capture.Read(_image);
                if (_image.Empty()) return;
                var imageRes = new Mat();
                Cv2.Resize(_image, imageRes, new Size(320, 240));
                var newImage = imageRes.Clone();
                if (_doFaceDetection) DetectFaces(newImage, imageRes);
                if (_fps) CalculateFps(startTime, newImage);
                var bmpWebCam = BitmapConverter.ToBitmap(imageRes);
                var bmpEffect = BitmapConverter.ToBitmap(newImage);
                pictureBoxWebCam.Image = bmpWebCam;
                pictureBoxEffect.Image = bmpEffect;
            }
        }

        private static void CalculateFps(DateTime startTime, Mat imageRes)
        {
            var diff = DateTime.Now - startTime;
            var fpsInfo = "FPS: Nan";
            if (diff.Milliseconds > 0)
            {
                var fpsVal = 1.0 / diff.Milliseconds * 1000;
                fpsInfo = $"FPS: {fpsVal:00}";
            }
            Cv2.PutText(imageRes, fpsInfo, new Point(10, 20), HersheyFonts.HersheyComplexSmall, 1, Scalar.White);
        }

        private void DetectFaces(Mat newImage, Mat imageRes)
        {
            // DNN
            int frameHeight = newImage.Rows;
            int frameWidth = newImage.Cols;
            using var blob = CvDnn.BlobFromImage(newImage, 1.0, new Size(300, 300), new Scalar(104, 117, 123), false, false);
            _faceNet.SetInput(blob, "data");
            using var detection = _faceNet.Forward("detection_out");
            using var detectionMat = new Mat(detection.Size(2), detection.Size(3), MatType.CV_32F, detection.Ptr(0));
            for (int i = 0; i < detectionMat.Rows; i++)
            {
                float confidence = detectionMat.At<float>(i, 2);
                if (confidence > 0.7)
                {
                    int x1 = (int)(detectionMat.At<float>(i, 3) * frameWidth);
                    int y1 = (int)(detectionMat.At<float>(i, 4) * frameHeight);
                    int x2 = (int)(detectionMat.At<float>(i, 5) * frameWidth);
                    int y2 = (int)(detectionMat.At<float>(i, 6) * frameHeight);
                    Cv2.Rectangle(newImage, new Point(x1, y1), new Point(x2, y2), Scalar.Green, LineThickness);
                    if (_doAgeGender)
                        AnalyzeAgeAndGender(x1, y1, x2, y2, imageRes, newImage);
                }
            }
        }

        private void AnalyzeAgeAndGender(int x1, int y1, int x2, int y2, Mat imageRes, Mat newImage)
        {
            // get face frame
            var x = x1 - Padding;
            var y = y1 - Padding;
            var w = (x2 - x1) + Padding * 3;
            var h = (y2 - y1) + Padding * 3;
            Rect roiNew = new Rect(x, y, w, h);
            var face = imageRes[roi: roiNew];
            var meanValues = new Scalar(78.4263377603, 87.7689143744, 114.895847746);
            var blobGender = CvDnn.BlobFromImage(face, 1.0, new Size(227, 227), mean: meanValues,
                swapRB: false);
            _genderNet.SetInput(blobGender);
            var genderPreds = _genderNet.Forward();
            GetMaxClass(genderPreds, out int classId, out double classProbGender);
            var gender = _genderList[classId];
            _ageNet.SetInput(blobGender);
            var agePreds = _ageNet.Forward();
            GetMaxClass(agePreds, out int classIdAge, out double classProbAge);
            var age = _ageList[classIdAge];
            var label = $"{gender},{age}";
            Cv2.PutText(newImage, label, new Point(x1 - 10, y2 + 20), HersheyFonts.HersheyComplexSmall, 1, Scalar.Yellow, 1);
        }

        private void GetMaxClass(Mat probBlob, out int classId, out double classProb)
        {
            // reshape the blob to a 1x1000 matrix
            using var probMat = probBlob.Reshape(1, 1);
            Cv2.MinMaxLoc(probMat, out _, out classProb, out _, out var classNumber);
            classId = classNumber.X;
            Debug.WriteLine($"X: {classNumber.X} - Y: {classNumber.Y} ");
        }
    }
}

Happy coding!

Greetings

El Bruno

References

Understanding Autonomous systems, Machine Teaching and Bonsai Platform

Throughout history, managing the factory environment during a pandemic has been a difficult task. Today, COVID-19 has forced industries worldwide into an era of adoption. In what seems like an instant, entire business lines have adopted remote operating tools, digital collaboration resources, and virtual working environments.

This normalization of next-generation technologies has opened the eyes of many frontline workers and executives to the power of tools like artificial intelligence and cloud collaboration.

Do autonomous systems mean “human independence”?

At large, the concept of autonomy in technology tends to be linked with independence—a self-driving car, for instance, is a great example of an autonomous system, but one that is incredibly complex. At the core of the autonomous system, expertly trained AI complements human experience by recognizing and adapting to a variety of situations to make the best decision possible. By adding a layer of active education and human oversight, autonomous systems ascend…

View original post 750 more words

#Podcast – NTN 71 – A story of ERPs, Dynamics 365, Dynamics AX, Finance and Operations, and a friendly podcast.


Hello!

It is always a pleasure to spend some time with other podcasters, and in this case, as a bonus, I have to say I learned a lot about other areas of Dynamics 365: Dynamics AX and the classic X++. It is an interesting topic, because people who work on what used to be known as Axapta usually have a great combination of business and technical knowledge. Along the way we covered topics like X++ and C# (or both), automated deployments, and much more.

And of course, we also took the chance to talk a bit about the hobby of podcasting. Adrià and Juan Antonio run the great Xpp.dev podcast (https://xpp.dev/); I recommend stopping by to get to know them a little better.

A great episode with great friends!

Speakers

  • Juan Antonio Tomás is a Microsoft BizApps MVP | MCT | Dyn365 F&O Technical Lead at AXAZURE | jatomas.com | xpp.dev (LinkedIn)
  • Adrià Ariste Santacreu works on Dynamics 365 at Axazure | Microsoft BizApps MVP | ariste.info | xpp.dev | MCT (LinkedIn)
  • Juan Carlos Quijano Abad is a Microsoft Certified Trainer, Azure Solutions Architect, and independent consultant on DevOps adoption (LinkedIn)
  • Bruno Capuano is Canada Innovation Lead at Avanade and Microsoft AI MVP (LinkedIn)

Happy coding!

Greetings

El Bruno