#Office – New Microsoft Word editor helps support Inclusive Language.


Hi!

I have already written about some cool features embedded in Office that help us be more productive. Most of these features are based on Artificial Intelligence, and one great example is the Microsoft Word Editor.

Today I was reading the article “New AI tools help writers be more clear, concise and inclusive in Office and across the web” (see references), and I realized that the Microsoft Word Editor focuses not only on productivity: it also helps us fight bias and be more inclusive while we write.

Important: this feature is not enabled by default. In order to enable it, we must follow these steps:

  • Open Word Options
  • Select Proofing section
  • Go to [When correcting spelling …. / Writing Style] option
  • Open Settings
  • Enable the [Inclusiveness] options
enable Inclusiveness options in Microsoft Word

Once we enable these features, the editor will start to analyze for age bias, cultural bias, and more.

I’ve tested this, and it’s nice to get suggestions about gender bias, like using “firefighter” instead of “fireman”.

suggestion: change fireman to firefighter

It also suggests alternatives if we use “whitelist”.

suggestion, do not use whitelist

In the References section, I shared two amazing articles where Microsoft explains the details of this feature.

And, on the personal side, this is also a great tool for non-English speakers like myself to learn about inclusiveness and do better work.

Super cool !

Happy coding!

Greetings

El Bruno

References

#Event – #GlobalAI On Tour Toronto, on Saturday June 27, let’s rock a full morning of AI


Hi !

The Global AI On Tour is a free event organized across the world in April, May and June by local communities that are passionate about artificial intelligence on Microsoft Azure.

We will run this in virtual mode, so for our local event, the one for the Greater Toronto Area, we are going to have speakers from all over the world!!!

Registration: https://www.meetup.com/metrotorontoug/events/267885516/

Agenda

https://sessionize.com/api/v2/tti45lz7/view/GridSmart?preview=True

  • 9:30 – 9:45: Introduction and Opening
  • 9:45 – 10:35: Computer, make it so! by Veronika Kolesnikova
  • 10:40 – 11:30: Make your Mixed Reality App include AI with Cognitive Services by Ivana Tilca
  • 11:35 – 12:25: AI and Cognitive Services in Power BI by Ashraf Ghonaim

Happy coding!

Greetings

El Bruno

#Event – Resources used during the #AI session on the Best of Build 2020 – Canada Community Edition event


Hi !

After an amazing event last Saturday, where we shared our favorite picks from Microsoft Build, it’s time to share the resources used in the Artificial Intelligence session.

Slides

Resources

  • Virtual Stage
  • AI Keynote
  • AI Super Computer
  • Cognitive Services
  • Project Bonsai

Happy coding!

Greetings

El Bruno

#Office – Another cool #Outlook Feature: review links and email participants before sending an email


Hi!

I have already written about some cool features embedded in Office that help us be more productive. Some of them are AI-based, and others, like this one, are so simple and useful that they deserve a post.

This is a classic one: we try to avoid big attachments in emails. If you are still attaching files to your emails, please STOP. Instead of attaching files, we store them in OneDrive or SharePoint and share the link.

One extra step here is to validate the permissions for the file or folder. You don’t want to share a file and forget to grant permissions to the intended audience.

That’s why Outlook now checks the links embedded in your email body against the recipients of your email, and shows the following message if some recipients don’t have access to the links in the message.

Super cool !

Happy coding!

Greetings

El Bruno

#Event – Resources used during the #GlobalAI Tour, Buenos Aires, May 30. Let’s code a drone to follow faces! Using AI, Python, containers and more.

Global AI Community on Virtual Tour Logo

Hi !

The event is over, and now it’s time to share the resources I used during the session.

Slides

Code

https://github.com/elbruno/events/tree/master/2020%2005%2030%20Global%20AI%20Tour%20BsAs%20Drone%20AI

Resources

Happy coding!

Greetings

El Bruno

#Event – #GlobalAI On Tour, Argentina. Let’s program a drone, in Spanish!

Global AI Community on Virtual Tour Logo

Hi!

Last-minute changes: tomorrow I will be one of the virtual speakers at the Global AI on Tour for Argentina. I’ll be talking about drones, or rather, more than talking, programming a drone a bit just for fun. And using a bit of AI to make it more entertaining.

To raise the bar, you can see in the agenda that the other speakers will cover much more serious and interesting topics.

Global AI On Tour Argentina

Happy coding!

Greetings

El Bruno

#Coding4Fun – How to control your #drone with 20 lines of code! (17/N)


Hi !

Once we have a trained Custom Vision model instance, we can use it to recognize objects from the drone camera feed. Read my previous posts for details on how to set this up.

Another interesting scenario is to save local files for every detected object. In the following code, I’ll save two different files for each detected object:

  • A camera frame image, with a frame around the detected object
  • A plain text file with the JSON information

In the sample code below, the save process happens in lines 122-129. Nothing fancy: both files share the same name, so they can easily be correlated.

drone recognized files
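
Before jumping to the full code, here is a minimal sketch of that save step. It assumes we already have the OpenCV frame (img), the detected tag name, and the parsed JSON response from the model; the save_detection helper and the file name format are mine, for illustration only.

import json
import time
import cv2

def save_detection(img, tag_name, json_response):
    # both files share the same base name, so they can be correlated later
    base_name = f"detected_{tag_name}_{time.strftime('%Y%m%d_%H%M%S')}"

    # 1) camera frame image, with the frame already drawn around the detected object
    cv2.imwrite(f"{base_name}.jpg", img)

    # 2) plain text file with the JSON information returned by the model
    with open(f"{base_name}.json", "w") as json_file:
        json.dump(json_response, json_file, indent=2)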

So let’s go to the full code:

And if you want to see this up and running, it’s much better to watch it in a video (start at ):

The complete source code can be found here https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Happy coding!

Greetings

El Bruno

References

#Coding4Fun – How to control your #drone with 20 lines of code! (16/N)


Hi !

In my previous post, I shared an example where I analyzed the camera feed using an image recognition model created with Custom Vision. Today I’ll expand the sample and show, in real time, the detected MVP logos with a frame in the drone camera feed.

Let’s take a look at the demo working in the following image.

drone camera image analysis using custom vision and drawing frames for detected objects

At the top of the image, we can see the app console log with the information received for each analyzed frame. When an object is detected, we can see the tag, the probability and the bounding box coordinates.

A sample JSON response starts like this:

{
  "created": "2020-04-08T17:22:02.179359",
  "id": "",
  "iteration": "",
  "predictions": [
    {
      "boundingBox": {
        "height": 0.1979116,
        "left": 0.3235259,
        "top": 0.05847502,
        "width": 0.20438321
      },
      "probability": 0.89171505,
      "tagId": 0,
      "tagName": "MVP"
    },
    {
      "boundingBox": {
        "height": 0.2091526,
        "left": 0.65271178,
        "top": 0.0433814,
        "width": 0.17669522
      },
      "probability": 0.70330358,
      "tagId": 0,
      "tagName": "MVP"
    },

In order to position the frames in the correct location, I need to do some math using the current camera image size and the returned bounding box values for height, left, top and width (lines 87-110).

# bb holds the relative (0-1) 'boundingBox' values from one prediction in the JSON response
resize_factor = 100

height = int(bb['height'] * resize_factor)
left = int(bb['left'] * resize_factor)
top = int(bb['top'] * resize_factor)
width = int(bb['width'] * resize_factor)

# adjust to the camera frame size (camera_Width and camera_Heigth are defined earlier in the full code)
height = int(height * camera_Heigth / 100)
left = int(left * camera_Width / 100)
top = int(top * camera_Heigth / 100)
width = int(width * camera_Width / 100)

# draw bounding boxes
start_point = (top, left)                 
end_point = (top + height, left + width) 
color = (255, 0, 0) 
thickness = 2                
cv2.rectangle(img, start_point, end_point, color, thickness)            
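
As a usage note, here is a hedged sketch of how the same math can be applied to every prediction in the JSON response shown earlier. The names predictions, img, camera_Width and camera_Heigth follow the snippet above; the 0.75 probability threshold is an assumption for illustration.

# assumption: 'predictions' is the parsed JSON response shown above (e.g. json.loads(response.text))
for prediction in predictions['predictions']:
    if prediction['probability'] < 0.75:  # probability threshold, value assumed for this sketch
        continue
    bb = prediction['boundingBox']
    # same math as above, collapsing the two resize steps into a single multiplication
    height = int(bb['height'] * camera_Heigth)
    width = int(bb['width'] * camera_Width)
    top = int(bb['top'] * camera_Heigth)
    left = int(bb['left'] * camera_Width)
    # draw the bounding box (point order follows the snippet above)
    cv2.rectangle(img, (top, left), (top + height, left + width), (255, 0, 0), 2)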

So let’s go to the full code:

And if you want to see this up and running, it’s much better to watch it in a video (start at ):

The complete source code can be found here https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Happy coding!

Greetings

El Bruno

References

#Coding4Fun – How to control your #drone with 20 lines of code! (15/N)


Hi !

Let’s use Custom Vision to analyze the images from our drone camera. In this scenario, I created a custom model to recognize MVP awards from my MVP wall. I know, that’s bragging, but I like it.

Disclaimer: there is plenty of documentation and tutorials about Custom Vision, so I won’t go deep into the steps to create a model. See references.

For this scenario, I assume that:

  • You have created a model in Custom Vision
  • You have published the Custom Vision model and have an HTTP endpoint
  • Or the model is exported as a Docker image and is running in a Docker container, also exposing an HTTP endpoint

The code is similar to the one we used before: OpenCV to hook up the camera, commands to take off and land. Let me highlight a couple of important sections in this code:

  • There are a couple of new imports, mostly used to process the JSON response from the Custom Vision model.
  • Lines 146-155. Get the frame from the drone camera and save it to a local file, applying a specific file name format for demo purposes.
  • Lines 157-163. Make an HTTP POST call to analyze the saved file, convert the result to a JSON object (room for improvement here), and analyze the JSON response.
  • Lines 70-85. Analyze the JSON response from the Custom Vision model: sort the results by probability, filter them using a threshold (75), and return a string with the detected objects (see the sketch after the JSON sample below).
  • Lines 165-178. Calculate and display FPS and detected objects.

A sample JSON response starts like this:

{
  "created": "2020-04-08T17:22:02.179359",
  "id": "",
  "iteration": "",
  "predictions": [
    {
      "boundingBox": {
        "height": 0.1979116,
        "left": 0.3235259,
        "top": 0.05847502,
        "width": 0.20438321
      },
      "probability": 0.89171505,
      "tagId": 0,
      "tagName": "MVP"
    },
    {
      "boundingBox": {
        "height": 0.2091526,
        "left": 0.65271178,
        "top": 0.0433814,
        "width": 0.17669522
      },
      "probability": 0.70330358,
      "tagId": 0,
      "tagName": "MVP"
    },
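
Before the full code, here is a minimal, hedged sketch of the analysis step described above: save the frame, POST it to the prediction endpoint, and keep only the predictions above the threshold. The prediction_url value and the get_detected_objects helper are placeholders of mine for illustration; a Custom Vision cloud endpoint also requires a Prediction-Key header, while the exported Docker container usually only needs the content type.

import json
import cv2
import requests

# placeholder values for this sketch: use the prediction URL of your own published model or container
probability_threshold = 0.75  # the post filters results with a threshold of 75
prediction_url = 'http://127.0.0.1/image'
headers = {'Content-Type': 'application/octet-stream'}

def get_detected_objects(frame):
    # save the current drone camera frame to a local file (lines 146-155 in the full code)
    file_name = 'frame.jpg'
    cv2.imwrite(file_name, frame)

    # HTTP POST the saved file to the Custom Vision endpoint (lines 157-163)
    with open(file_name, 'rb') as image_file:
        response = requests.post(prediction_url, headers=headers, data=image_file)
    results = json.loads(response.text)

    # sort by probability, filter with the threshold and build the result string (lines 70-85)
    predictions = sorted(results['predictions'], key=lambda p: p['probability'], reverse=True)
    detected = [p['tagName'] for p in predictions if p['probability'] >= probability_threshold]
    return ', '.join(detected)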

So let’s go to the full code:

And if you want to see this up and running, it’s much better to watch it in a video (start at ):

The complete source code can be found here https://github.com/elbruno/events/tree/master/2020%2004%2018%20Global%20AI%20On%20Tour%20MTY%20Drone%20AI%20Mex

Happy coding!

Greetings

El Bruno

References

#Event – #GlobalAzure Virtual 2020, AI Expert Panel! Live from Spain, in Spanish :D


Hi!

As part of Global Azure Virtual, next Friday the 24th in your local time, I’m lucky enough to take part in an Artificial Intelligence expert panel, talking about a variety of AI topics. I’m lucky to share the stage with:

Elena Salcedo

Software Developer @ Intelequia | MVP AI

I have fun programming bots, and I work building Legos. Some day my bots will build Legos.

Rodrigo Cabello

AI Team Lead at Plain Concepts and MVP in Artificial Intelligence

AI Team Lead and AI MVP at Plain Concepts (Madrid), with more than 9 years of experience developing web applications. He is enthusiastic about new technologies and apps that use techniques based on Artificial Intelligence.

David Rodriguez

CTO at Intelequia Software Solutions, Microsoft Regional Director and Azure MVP

I’m David Rodriguez, a Spanish Microsoft Regional Director and Azure Microsoft Most Valuable Professional, living in Tenerife, Canary Islands. In 2011, I co-founded Intelequia Software Solutions, a company focused on helping customers adopt the cloud model, as well as the TenerifeDev user group.

And the session will be a great conversation:

Do you know the main benefits of IoT and Artificial Intelligence? In this session we will talk about how to make our IoT devices smarter with AI, and how we can achieve this by implementing different Microsoft technologies and open source projects. We will have an honest talk about the cutting-edge projects we are working on right now, including real cases such as home and industrial automation, and how they are being accepted by the people who interact with them.

AI Expert Panel

More information about the session at https://virtual.globalazure.net/sessions/198313

And the full list of sessions for the event: https://virtual.globalazure.net

Happy coding!

Greetings

El Bruno