#Humor – Not good at cropping …


Source: Internet

Happy coding!

Greetings

El Bruno

All you need to know to pass AZ-900 Azure Fundamentals June 2020

Super useful tips from Ivana !

This week I took and passed the AZ-900 Azure Fundamentals exam. This exam is an opportunity to prove your knowledge of cloud concepts, core Azure services, Azure pricing, SLAs, lifecycles, and the fundamentals of cloud security, privacy, compliance, and trust. In addition, you should be familiar with the concepts of networking, storage, compute, application support, and application development.

The exam content was updated this past May, which means it also covers newer service releases, such as Cognitive Services, among others.

  • The content of this exam was updated on May 28, 2020. Please download the exam skills outline below to see what changed.
  • Describe cloud concepts (15-20%)
  • Describe core Azure services (30-35%)
  • Describe security, privacy, compliance, and trust (25-30%)
  • Describe Azure pricing, Service Level Agreements, and lifecycles (20-25%)

So let's get to the point… What resources did I use to pass the exam?

1 – Microsoft Learn

https://docs.microsoft.com/en-us/learn/

Microsoft Learn is a free online learning platform…


#Podcast – NTN 55 – Machine Learning and data, lots of data, with Miguel Egea @miguelEgea and Juan Carlos Quijano @jc_quijano


Hi!

Today we have the good fortune to talk with Miguel Egea (@miguelEgea) and Juan Carlos Quijano (@jc_quijano) about Machine Learning and data, lots of data. Miguel is one of the leading references when it comes to data, so having the opportunity to talk with him about several topics is a real privilege!

Miguel Egea is a Technical Advisor at Solid Quality Mentors; Juan Quijano is a Microsoft Certified Trainer, Azure Solutions Architect, and independent consultant on DevOps adoption.

Go to download

Happy coding!

Greetings

El Bruno

#Event – Resources used during the #AI session on the Best of Build 2020 – Canada Community Edition event


Hi !

After an amazing event last Saturday, where we shared our favorite announcements from Microsoft Build, it's now time to share the resources used in the Artificial Intelligence session.

Slides

Resources

Virtual Stage

AI Keynote

AI Super Computer

Cognitive Services

Project Bonsai

Happy coding!

Greetings

El Bruno

#Event – Best of Build 2020 – Canada Community Edition


Hi !

Build 2020 took place on May 19-20, 2020: a 48-hour marathon of 600+ sessions, exciting announcements, and a lot more. Our community leaders, MVPs, Microsoft Cloud Advocates, and PMMs are getting together to share some of the excitement with you. Do join us for this Canada-wide virtual event, delivered by some of the best we have, from the East to the West of Canada!

Link to join: https://aka.ms/Best_Of_Build_2020_Teams_Link

Agenda

  • 11:00 – 11:30
    Enabling Remote Developer Productivity using Visual Studio Codespaces + Live Share
    by Anthony Chu (Program Manager, Azure Functions – Microsoft) and Jen Looper (Cloud Developer Advocate Lead, Microsoft)
  • 11:30 – 12:00
    Azure Static Web Apps, Automatic CI/CD with GitHub Actions, Backend API with Azure Functions
    by Callon Campbell (Solution Architect, MVP) and Jay Gordon (Cloud Advocate, Azure Modern Ops – Microsoft)
  • 12:00 – 12:30
    IoT announcements
    by Glen Colpaert (Founder, Zure Belgium, MVP) and Ehsan Eskandari (Sr. Technical Consultant, MVP)
  • 12:30 – 1:00
    Microsoft Power Apps announcements
    by Eli Robillard (Consulting Strategist, MVP)
  • 1:00 – 1:30
    Developer Technologies: C# announcements
    by Anthony Giretti (MVP), Frank Boucher (Cloud Advocate, Microsoft) and Brandon Minnick (Cloud Advocate, Microsoft)
  • 1:30 – 2:00
    Kubernetes announcements
    by Roy Kim (Solution Architect, MVP)
  • 2:00 – 2:30
    AI announcements
    by Bruno Capuano (Innovation Lead, Avanade & MVP) and Frank Boucher (Cloud Advocate, Microsoft)
  • 2:30 – 3:00
    Power BI announcements
    by Ashraf Ghonaim (Strategic Management – City of Toronto, MVP)

Link to join: https://aka.ms/Best_Of_Build_2020_Teams_Link

Happy coding!

Greetings

El Bruno

#ComputerVision – Object Detection with #YoloV4 (work in progress …) and let’s think about ethics in Computer Vision


Hi !

So, after yesterday's post where I used YoloV3 and MobileNetSSD, I also remembered that YoloV4 was released in April. I managed to make my code work with YoloV4, although with poor FPS results.

If you are interested in the code, let me know and I'll be happy to share it. It's still a mess: working, but a mess.

Abstract: There are a huge number of features which are said to improve Convolutional Neural Network (CNN) accuracy. Practical testing of combinations of such features on large datasets, and theoretical justification of the result, is required. Some features operate on certain models exclusively and for certain problems exclusively, or only for small-scale datasets; while some features, such as batch-normalization and residual-connections, are applicable to the majority of models, tasks, and datasets. We assume that such universal features include Weighted-Residual-Connections (WRC), Cross-Stage-Partial-connections (CSP), Cross mini-Batch Normalization (CmBN), Self-adversarial-training (SAT) and Mish-activation. We use new features: WRC, CSP, CmBN, SAT, Mish activation, Mosaic data augmentation, CmBN, DropBlock regularization, and CIoU loss, and combine some of them to achieve state-of-the-art results: 43.5% AP (65.7% AP50) for the MS COCO dataset at a realtime speed of ~65 FPS on Tesla V100.

However, I also learned part of the story behind YoloV4, which is very relevant these days. The next 10-minute video really nails the explanation of how YoloV4 works.

YOLO History

YOLO was developed by Joseph Redmon. It was first presented in 2016 and was a key milestone for object recognition research, leading to better and faster Computer Vision algorithms.

The latest version, YOLO v4, is currently developed by three developers:

  • Alexey Bochkovskiy
  • Chien-Yao Wang
  • Hong-Yuan Mark Liao

No Joseph Redmon in YOLOv4?

Joseph Redmon stepped away from YOLO before v4 because of the potential misuse of his tech. He recently announced that he would stop doing computer vision research because of military applications and ethical concerns…

So, why is this important? It's all about how we use this technology. There are amazing advances in the Computer Vision area, but we are also lacking regulation about how to use them.

IBM announced that they will no longer offer facial recognition software

Two days ago, IBM announced that they will no longer offer facial recognition software. The Verge wrote an amazing article about this (see references). These sentences really hit the point regarding ethics and more:

IBM will no longer offer general purpose facial recognition or analysis software, IBM CEO Arvind Krishna said in a letter to Congress today. The company will also no longer develop or research the technology, IBM tells The Verge. Krishna addressed the letter to Sens. Cory Booker (D-NJ) and Kamala Harris (D-CA) and Reps. Karen Bass (D-CA), Hakeem Jeffries (D-NY), and Jerrold Nadler (D-NY).

“IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna said in the letter. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

Facial recognition software has come under scrutiny for issues with racial bias and privacy concerns.

Facial recognition software has improved greatly over the last decade thanks to advances in artificial intelligence. At the same time, the technology — because it is often provided by private companies with little regulation or federal oversight — has been shown to suffer from bias along lines of age, race, and ethnicity, which can make the tools unreliable for law enforcement and security and ripe for potential civil rights abuses.

The Verge, IBM will no longer offer, develop, or research facial recognition technology

There it is, think about this.

Happy coding!

Greetings

El Bruno

Resources

#ComputerVision – Object Detection with #YoloV3 and #MobileNetSSD


Hi !

I have a ToDo on my list to add some new drone demos. In order to do this, I was planning to run some tests with pretrained models and use them. The first two on my list are Yolo and MobileNetSSD (see references).

YoloV3

Let’s start with one of the most popular object detection tools, YOLOV3. The official definition:

YOLO (You Only Look Once) is a real-time object detection algorithm that is a single deep convolutional neural network that splits the input image into a set of grid cells, so unlike image classification or face detection, each grid cell in YOLO algorithm will have an associated vector in the output that tells us:

If an object exists in that grid cell.

The class of that object (i.e label).

The predicted bounding box for that object (location).

YoloV3
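To make that definition a bit more concrete, here is a tiny sketch (my own illustration, not code from any official repo) of how a single row of the YoloV3 output can be read when the network runs through OpenCV's dnn module. The 85-value layout (4 box values, 1 objectness score, 80 COCO class scores) is the standard YoloV3 format; the numbers below are made up.

import numpy as np

# One row of the YoloV3 output for the COCO model: 4 box values, 1 objectness
# score and 80 class scores. The values below are invented for illustration.
detection = np.zeros(85, dtype=np.float32)
detection[0:4] = [0.5, 0.5, 0.2, 0.3]   # center x, center y, width, height (relative to the frame)
detection[4]   = 0.9                    # does an object exist in this grid cell?
detection[5]   = 0.8                    # score for class 0 ("person" in COCO)

frameWidth, frameHeight = 1024, 780

scores     = detection[5:]              # the 80 class scores
classId    = int(np.argmax(scores))     # the class of the object (label)
confidence = float(scores[classId])

# the predicted bounding box, scaled back to pixel coordinates
cx, cy, w, h = detection[0:4] * np.array([frameWidth, frameHeight, frameWidth, frameHeight])
x, y = int(cx - w / 2), int(cy - h / 2)
print(classId, confidence, (x, y, int(w), int(h)))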

I picked up some sample code from GitHub repositories and, as usual, from PyImageSearch (see references), and I created a real-time object detection scenario using my webcam as the input feed for YoloV3.

Object Detection live sample with Yolo V3

The final demo works great: we can use the 80 classes that YoloV3 supports, and it runs at ~2 FPS.
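If you want to build something similar while I clean up my repo, this is a condensed sketch of the kind of loop I ended up with; it is not my exact code. It assumes you have the standard yolov3.cfg, yolov3.weights and coco.names files locally (those file names are just placeholders for wherever you keep them), and it reuses the row-parsing idea shown above.

import cv2
import numpy as np

# Assumed local files: standard YoloV3 config, weights and COCO labels
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
labels = open("coco.names").read().strip().split("\n")
outputLayers = net.getUnconnectedOutLayersNames()

video = cv2.VideoCapture(0)  # main webcam as the input feed
while True:
    ret, frame = video.read()
    if not ret:
        break
    H, W = frame.shape[:2]

    # YoloV3 expects a scaled, RGB, 416x416 blob
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(outputLayers)

    boxes, confidences, classIds = [], [], []
    for output in outputs:
        for detection in output:
            scores = detection[5:]
            classId = int(np.argmax(scores))
            confidence = float(scores[classId])
            if confidence > 0.5:
                cx, cy, w, h = detection[0:4] * np.array([W, H, W, H])
                boxes.append([int(cx - w / 2), int(cy - h / 2), int(w), int(h)])
                confidences.append(confidence)
                classIds.append(classId)

    # non-maxima suppression to remove overlapping boxes
    idxs = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.3)
    for i in np.array(idxs).flatten():
        x, y, w, h = boxes[i]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, labels[classIds[i]], (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

    cv2.imshow("YoloV3", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

video.release()
cv2.destroyAllWindows()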

MobileNetSSD

Another very popular object detection tool is MobileNetSSD. The important part here is SSD: Single Shot Detection. Let's go to the definition:

Single Shot object detection or SSD takes one single shot to detect multiple objects within the image. As you can see in the above image we are detecting coffee, iPhone, notebook, laptop and glasses at the same time.

It is composed of two parts:

– Extract feature maps, and

– Apply convolution filter to detect objects

SSD was developed by Google researchers to maintain the balance between the two object detection methods, YOLO and R-CNN.

There are specifically two models of SSD available:

– SSD300: In this model the input size is fixed to 300×300. It is used for lower resolution images, offers faster processing speed, and is less accurate than SSD512.

– SSD512: In this model the input size is fixed to 512×512. It is used for higher resolution images and is more accurate than the other model.

SSD is faster than R-CNN because in R-CNN we need two shots, one for generating region proposals and one for detecting objects, whereas in SSD it can be done in a single shot.

The MobileNet SSD method was first trained on the COCO dataset and was then fine-tuned on PASCAL VOC reaching 72.7% mAP (mean average precision).

For this demo, I'll use the SSD300 model. Even if the drone supports better quality images and the SSD512 model works with bigger images, SSD300 is a good fit here.

Object Detection with MobileNetSSD

This sample works at ~20 FPS, which triggered my curiosity to learn more about this second approach. I started to read a lot about it and found some amazing articles and papers. In the end, if you are interested in my personal take, I really enjoyed this 30-minute video comparing the different detectors side by side.
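Again, as a hedged starting point rather than my exact code, this is how the MobileNetSSD loop typically looks with OpenCV's dnn module. It assumes the commonly shared MobileNetSSD_deploy.prototxt and MobileNetSSD_deploy.caffemodel files (placeholder names for wherever you keep them) and the 20 PASCAL VOC classes plus background.

import cv2
import numpy as np

# Assumed local files: the commonly shared MobileNetSSD Caffe deploy files
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt", "MobileNetSSD_deploy.caffemodel")
classes = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle", "bus",
           "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike",
           "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]

video = cv2.VideoCapture(0)
while True:
    ret, frame = video.read()
    if not ret:
        break
    H, W = frame.shape[:2]

    # SSD300: fixed 300x300 input, mean subtraction of 127.5 and scale 1/127.5
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()

    # detections shape: (1, 1, N, 7) -> [image_id, class_id, confidence, x1, y1, x2, y2]
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        if confidence > 0.5:
            classId = int(detections[0, 0, i, 1])
            box = detections[0, 0, i, 3:7] * np.array([W, H, W, H])
            x1, y1, x2, y2 = box.astype(int)
            cv2.rectangle(frame, (x1, y1), (x2, y2), (255, 0, 0), 2)
            cv2.putText(frame, classes[classId], (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)

    cv2.imshow("MobileNetSSD", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

video.release()
cv2.destroyAllWindows()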

Source Code

YoloV3 webcam live object detection

MobileNetSSD webcam live object detection

Happy coding!

Greetings

El Bruno

Resources

#Coding4Fun – How to control your #drone with 20 lines of code! (21/N)


Hi !

In my post series, I already wrote about how to detect faces. We can do this with a camera and OpenCV. However, a drone can also be moved on command, so let's write some lines of code to detect a face and calculate the orientation and distance of the detected face from the center of the camera frame.

In order to do this, let's first draw a grid in the camera frame, and once a face is detected, show its distance and orientation from the center.

face detected on camera and calculate position from center

Let’s start with a Grid. The idea is to create a 3×3 grid in the camera frame, and use the center cell as reference for the detected objects. The code to create a 3×3 grid is this one:

import cv2

# Camera Settings
camera_Width  = 1024 # 1280 # 640
camera_Heigth = 780  # 960  # 480
centerZone    = 100

# GridLine color green and thickness
lineColor = (0, 255, 0)
lineThickness = 2

def displayGrid(frame):
    # Add a 3x3 Grid: 2 vertical and 2 horizontal lines around the center zone
    cv2.line(frame, (int(camera_Width/2)-centerZone, 0)     , (int(camera_Width/2)-centerZone, camera_Heigth)    , lineColor, lineThickness)
    cv2.line(frame, (int(camera_Width/2)+centerZone, 0)     , (int(camera_Width/2)+centerZone, camera_Heigth)    , lineColor, lineThickness)
    cv2.line(frame, (0, int(camera_Heigth / 2) - centerZone), (camera_Width, int(camera_Heigth / 2) - centerZone), lineColor, lineThickness)
    cv2.line(frame, (0, int(camera_Heigth / 2) + centerZone), (camera_Width, int(camera_Heigth / 2) + centerZone), lineColor, lineThickness)

We use the line() function from OpenCV and do some calculations to get the start and end points for the 4 lines of the grid: 2 vertical lines and 2 horizontal lines. For this demo, I'll implement this with my main webcam.

drone 3x3 grid in the camera frame

Based on my face detection samples and other samples on GitHub (see references), I'll now calculate the position of the detected face (given x, y, h, w) relative to the center of the camera:

# Colors and thickness for the on-screen messages
colorGreen = (0, 255, 0)
colorRed   = (0, 0, 255)
colorBlue  = (255, 0, 0)
messageThickness = 2

def calculatePositionForDetectedFace(frame, x, y, h, w):
    # calculate direction and relative position of the face
    cx = int(x + (w / 2))  # Center X of the Face
    cy = int(y + (h / 2))  # Center Y of the Face

    if cx < int(camera_Width / 2) - centerZone:
        cv2.putText(frame, " LEFT ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 3)
        dir = 1
    elif cx > int(camera_Width / 2) + centerZone:
        cv2.putText(frame, " RIGHT ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 3)
        dir = 2
    elif cy < int(camera_Heigth / 2) - centerZone:
        cv2.putText(frame, " UP ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 3)
        dir = 3
    elif cy > int(camera_Heigth / 2) + centerZone:
        cv2.putText(frame, " DOWN ", (20, 50), cv2.FONT_HERSHEY_COMPLEX, 1, colorGreen, 3)
        dir = 4
    else:
        dir = 0

    # display detected face frame, line from center and direction to go
    cv2.line     (frame, (int(camera_Width/2), int(camera_Heigth/2)), (cx, cy), colorRed, messageThickness)
    cv2.rectangle(frame, (x, y), (x + w, y + h), colorBlue, messageThickness)
    cv2.putText  (frame, str(int(x)) + " " + str(int(y)), (x - 20, y - 45), cv2.FONT_HERSHEY_COMPLEX, 0.7, colorRed, messageThickness)

    return dir

The output is similar to this one:

And now, with the base code completed, it's time to add this logic to the drone samples!
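In case it helps before jumping to the drone, this is a minimal sketch of the webcam main loop that ties the two helpers above together. It assumes OpenCV's bundled Haar cascade for the face detection step (the cascade path comes from the cv2.data package; everything else reuses the names from the snippets above).

# Haar cascade shipped with OpenCV, used here just to get face rectangles
faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

video = cv2.VideoCapture(0)
while True:
    ret, frame = video.read()
    if not ret:
        break
    frame = cv2.resize(frame, (camera_Width, camera_Heigth))

    displayGrid(frame)

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # direction could later be translated into a drone movement command
        direction = calculatePositionForDetectedFace(frame, x, y, h, w)

    cv2.imshow("Face position from center", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

video.release()
cv2.destroyAllWindows()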

Bonus: the complete code.

Happy coding!

Greetings

El Bruno

References

#SomeGoodNews: I’ll be at the Talk 4 Kids 2020 event. #t4k2020 @talks4kids


Hi !

Let’s share Some Good News.

Before moving to Canada, my last event in Spain was probably one of the best ones ever: with a couple of friends, we hosted Talk 4 Kids.

Talk 4 Kids

The event is focused on technology, with amazing speakers, and the funds raised go entirely to an NGO or foundation that runs programs and actions focused on children (see the Theodora Foundation below).

In 2015 we set ourselves a goal of €1,500.00. If we managed to reach this figure, all the speakers would give their talks in costume. Luckily for everyone, and unfortunately for the speakers, we raised €1,895 and everyone dressed up… you can still find those images and photos on the internet.

This year we will have an online version, with tons of surprises, and you have two ways to collaborate and donate:

The speaker lineup is almost ready, and the agenda will be released soon. So, if you want to share some smiles with kids, this is a great opportunity to be part of something amazing.

Theodora Foundation

At the Theodora Children’s Charity we believe that all children deserve laughter. The mission at the heart of our work is to contribute to the wellbeing of children in hospitals, hospices and specialist care centres by providing visits from professional performers who have been trained to work in a medical environment.

Our Giggle Doctors visit children in hospitals, hospices and specialist care centres across England. By combining music, play, magic and storytelling these professional performers bring magical moments to brave children in challenging times. We believe that every sick or disabled child should get the giggles that only a Giggle Doctor can bring.

Although a Giggle Doctor visit may only last a few minutes, the effect it can have on a child can last a lifetime. The visits often have a lasting impact on the children and families they visit. We often receive feedback from parents telling us the visits have helped to improve a child’s long-term experience of hospital.

“Because of the Giggle Doctors my daughter actually looks forward to coming to hospital for chemotherapy”

“The Giggle Doctor visits have helped my son to interact as he has developed a phobia of medical staff and the Giggle Doctors have helped him be less scared of people on the ward”

A Giggle Doctor visit isn’t just about one magical moment, it can have a significant and lasting impact on a child’s well-being and perception of the hospital environment. 

Happy coding!

Greetings

El Bruno

#Office – Another cool #Outlook Feature: review links and email participants before sending an email


Hi!

I have already written about some cool features embedded in Office that help us be more productive. Some of them are AI-based, and others, like this one, are so simple and useful that they deserve a post.

This is a classic one: we are trying to avoid big attachments in emails. If you are still attaching files to your emails, please STOP. Instead of attaching files, store them in OneDrive or SharePoint and share the link.

One extra step here is to validate the permissions for the file or folder. You don't want to share a file and forget to grant permissions to the desired audience.

That's why Outlook will now check the links embedded in your email body against the recipients of your email, and show the following message if some recipients don't have access to the links in the message.

Super cool !

Happy coding!

Greetings

El Bruno