#Humor – How is your #AI algorithm working? Fine …

Hi!

Facebook detected cruel or insensitive content

Source: https://www.facebook.com/permalink.php?story_fbid=769667186803358&id=380869852349762


#Event – Global AI Nights on Sept 5th!

Hi !

During August, I’ll be participating and supporting a couple of hackathons and work events (check my next events section!).

And I’m happy to share that on September 5th I’ll be part of the Global AI Night in Toronto.

The Global AI Night is a free evening event organized by 88 communities all over the world that are passionate about Artificial Intelligence on Microsoft Azure. During this AI Night you will get inspired through sessions and get your hands dirty during the workshops. By the end of the night you will be able to infuse AI into your applications.

Registration and information for Toronto are available here: https://global.ainights.com/bootcamp/8d354913-4243-4a9e-8c1d-a594dc7dbe69

As usual, the best way to explain this is with a video

Happy coding!

Greetings @ Rogers Cup

El Bruno

#AI – Introduction to #deeplearning vs. #machinelearning by @frlazzeri. The best 10 min read for today

Hi!

Explaining the differences and the relationship between Machine Learning and Deep Learning is something I get asked to do at every event or chat about Machine Learning.

And I used to have my own 5-bullet explanation for this. However, now, thanks to Francesca Lazzeri (@frlazzeri), I can advise people to read this amazing article instead.

Introduction to deep learning vs. machine learning


So, if you have 10 minutes, this will really help you understand the relationships between AI, ML and DL!

Happy Coding!

Greetings @ Toronto

El Bruno

#Office – Live subtitles in Microsoft Teams, oh yeah! Another great #AI live sample.

Hi !

I usually use the live subtitles demo feature in PowerPoint to showcase how amazing the current state of AI is, and how we can use it in our daily lives. And now, after Microsoft's official announcement, we can also use the live subtitles feature in Microsoft Teams.

As you might expect, it is very easy to use: just enable the Live Subtitles feature and Microsoft Teams will automatically start to

  • Listen to every audio conversation
  • Convert the audio to text
  • Present the text live as subtitles in the MS Teams window
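
The steps above can be sketched as a toy pipeline. This is only an illustration: the transcription step is a stub standing in for Microsoft's real speech-to-text service, and every name here is made up for the example.

```python
def capture_audio_chunks(meeting_audio):
    """Yield chunks from the meeting audio stream (here, a plain list)."""
    for chunk in meeting_audio:
        yield chunk

def transcribe(chunk):
    """Stub speech-to-text: in this sketch the 'audio' is already text."""
    return chunk.strip()

def run_live_subtitles(meeting_audio):
    """Listen -> convert to text -> collect the lines to display."""
    subtitles = []
    for chunk in capture_audio_chunks(meeting_audio):
        text = transcribe(chunk)
        if text:  # skip silent chunks
            subtitles.append(text)  # Teams would render this in the window
    return subtitles

print(run_live_subtitles(["Hello everyone ", "", "let's start the demo"]))
```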

There is a nice animation of this in the official announcement.

We can also expect some extra features in the future, like language translation and more. That will be so cool!

Happy coding!

Greetings @ Toronto

El Bruno


#Office – Acronyms pane in Word, another amazing example of #AI embedded in our day to day tools – Powered by Microsoft Graph!

Hi!

Today’s post is, one more time, related to some amazing Artificial Intelligence features embedded in Microsoft Office. This one is very helpful if you work in an organization with tons of acronyms. I’m sure you have your own set of acronyms at different levels: team, group and organization.

When you are new to these acronyms, it is very hard to get up to date with all of them. That’s why the Acronyms feature in Word is so useful: it can save us a lot of time!

The Acronyms pane is in the [References] tab in the Ribbon, or you can just search for it.

Search for Acronyms Pane in Word

Once you enable the pane, it will analyze the text of your Word document, together with the definitions most used in your organization, to get a sense of “what can be an acronym“. It leverages the Microsoft Graph to surface definitions of terms that have been previously defined across emails and documents.
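
To get an intuition for the “what can be an acronym“ part, here is a minimal sketch. The glossary is hypothetical, standing in for the definitions the Microsoft Graph surfaces from your organization's emails and documents; the real feature is far more sophisticated.

```python
import re

# Hypothetical glossary: stands in for definitions the Microsoft Graph
# finds across your organization's emails and documents.
GLOSSARY = {
    "PoC": "Proof of Concept",
    "UWP": "Universal Windows Platform",
}

def find_acronym_candidates(text):
    """Tokens that start and end with a capital letter look like acronyms."""
    return sorted(set(re.findall(r"\b[A-Z][A-Za-z0-9]*[A-Z]\b", text)))

def explain_acronyms(text):
    """Pair each candidate with a definition when the glossary knows it."""
    return {a: GLOSSARY.get(a, "(no definition found)")
            for a in find_acronym_candidates(text)}

print(explain_acronyms("Our UWP app started as a PoC for the CV team."))
```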

The results are amazing:

Word Acronyms page results

Another amazing example of AI in our day to day use.

Happy coding!

Greetings @ Burlington

El Bruno


#AI – MineRL, play #Minecraft to benefit science!

Hi !

I’ve written a couple of times about Project Malmo and Minecraft, so if you like Minecraft and Artificial Intelligence, MineRL will make your day. Let’s start with some basics:

MineRL is a large-scale dataset on Minecraft of seven different tasks, which highlight a variety of research challenges including open-world multi-agent interactions, long-term planning, vision, control, navigation, and explicit and implicit subtask hierarchies.

There are 2 main ways to get involved with MineRL: entering the AI (Deep Learning) competition, or playing Minecraft (to create more source data to train and test models!).

On the playing side, MineRL wants to solve Minecraft using state-of-the-art Machine Learning. To do so, MineRL is creating one of the largest datasets of recorded human player data. The dataset includes a set of tasks which highlight many of the hardest problems in modern-day Reinforcement Learning: sparse rewards and hierarchical policies.
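
To see why sparse rewards are hard, here is a toy example (not MineRL code, just an illustration): the agent receives a reward only when it reaches the goal state, so a policy that wanders randomly rarely gets any learning signal at all.

```python
import random

def run_episode(policy, goal=5, max_steps=20, seed=None):
    """Walk on a number line from 0; the ONLY reward is +1 for reaching the goal."""
    rng = random.Random(seed)
    state = 0
    for _ in range(max_steps):
        state += policy(state, rng)  # action: move -1 or +1
        if state == goal:
            return 1  # sparse reward: granted only here
    return 0  # every other outcome gives zero learning signal

greedy_policy = lambda s, rng: 1                    # always step toward the goal
random_policy = lambda s, rng: rng.choice([-1, 1])  # wander aimlessly

print(run_episode(greedy_policy))  # -> 1 (reaches the goal)
```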

There is plenty of information and details on the main website, and as soon as I finish some of my current work and personal projects, I’ll definitely spend more time on this!

More information http://minerl.io/about/

Happy coding!

Greetings @ Toronto

El Bruno

#Event – See you @ChicagoCodeCamp on May 11, 2019 for some Deep Learning and Custom Vision experiences

Hi !

I’m very lucky to be at the next Chicago CodeCamp with another session around Custom Vision:

How a PoC at home can scale to Enterprise Level using Custom Vision APIs

It all started with a DIY project to use Computer Vision for security cameras at home. A custom Machine Learning model is the core component used to analyze pictures to detect people, animals and more in a house environment. The AI processing is performed at the edge, in dedicated hardware and the collected information is stored in the cloud. The same idea can be applied to several CCTV scenarios, like parking lots, train stations, malls and more. However, moving this into enterprise scale brings a set of challenges, which are going to be described and explained in this session.

More Information: https://www.chicagocodecamp.com/ and remember that we will be also talking about Deep Learning.

Greetings @ Toronto

El Bruno

#Windows10 – Windows #VisionSkills sample UWP App

Hi!

Yesterday the Windows Team announced the preview version of Windows Vision Skills. So today I was browsing the samples on GitHub, and I’ve created a simplified version of the skeleton tracker using a live feed from a webcam.

Here are some notes about my GitHub sample:

  • The UWP App must target Windows 10 version 1809
  • I added the NuGet packages [Microsoft.AI.Skills.Vision.SkeletalDetectorPreview] and [Microsoft.Toolkit.Uwp.UI.Controls]
  • The MainView uses the CameraPreview control from the [Microsoft.Toolkit.Uwp.UI.Controls] toolkit.
  • Each frame is processed and I use a SkeletalBinding to detect Skeletons / bodies
  • The core detection is performed here
        private async Task RunSkillAsync(VideoFrame frame, bool isStream)
        {
            m_evalPerfStopwatch.Restart();

            // Update input image and run the skill against it
            await m_skeletalDetectorBinding.SetInputImageAsync(frame);
            await m_skeletalDetectorSkill.EvaluateAsync(m_skeletalDetectorBinding);

            m_evalPerfStopwatch.Stop();
            m_skeletalDetectionRunTime = m_evalPerfStopwatch.ElapsedMilliseconds;

            // Render the detection results on the UI thread
            await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
            {
                m_bodyRenderer.Update(m_skeletalDetectorBinding.Bodies, !isStream);
                m_bodyRenderer.IsVisible = true;
                UISkillOutputDetails.Text = $"Found {m_skeletalDetectorBinding.Bodies.Count} bodies (took {m_skeletalDetectionRunTime} ms)";
            });
        }
  • There is also a BodyRenderer.cs class used to draw the skeletons on top of the CameraPreview Image control. It draws lines on an empty canvas.

You can download the sample code from here https://github.com/elbruno/Blog/tree/master/20190501%20VisionSkills%20Skeleton%20Sample

Greetings @ Burlington

El Bruno


#Windows10 – Windows Vision Skills (Preview), an amazing set of AI APIs to run in the edge!

Hi!

Today’s announcement is a big one if you are interested in moving AI capabilities to the edge. The Windows team made public the preview of the Windows Vision Skills framework:

Windows Vision Skills framework is meant to standardize the way AI and CV is put to use within a WinRT application running on the edge. It aims to abstract away the complexity of AI and CV techniques by simply defining the concept of skills which are modular pieces of code that process input and produce output. The implementation that contains the complex details is encapsulated by an extensible WinRT API that inherits the base class present in this namespace, which leverages built-in Windows primitives which in-turn eases interop with built-in acceleration frameworks or external 3rd party ones.
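
As a mental model (this is not the actual WinRT API, just a sketch of the idea), a skill is a modular piece of code that processes input and produces output, and a binding carries that input and output for one evaluation:

```python
class SkillBinding:
    """Holds the input and output of one skill evaluation."""
    def __init__(self):
        self.input = None
        self.output = None

class Skill:
    """Base class: concrete skills encapsulate the complex AI/CV details."""
    def create_binding(self):
        return SkillBinding()

    def evaluate(self, binding):
        raise NotImplementedError

class WordCountSkill(Skill):
    """Trivial stand-in for e.g. an object detector or skeletal detector."""
    def evaluate(self, binding):
        binding.output = len(binding.input.split())

# Usage mirrors the pattern: create a binding, set the input, evaluate.
skill = WordCountSkill()
binding = skill.create_binding()
binding.input = "detect bodies in this frame"
skill.evaluate(binding)
print(binding.output)  # -> 5
```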

The official blog post explains the basic features of the framework and describes a set of scenarios like Object Detector, Skeletal Detector, and Emotion Recognizer.

There are UWP sample apps in the repo, and it only took 1 minute to set everything up and get the app up and running. In the following image, it smoothly detects a person and a chair.

The next image is the sample for the Skeletal Detector (as an old Kinect dev, this really makes me happy!)

This is a big announcement, because all of these APIs are native, and that means we can easily use them in our WinRT applications running on the edge.

Greetings @ Toronto

El Bruno



#AI – AI for Earth, AI tools in the hands of those working to solve global environmental challenges

Hi !

When I was in Ohio @CodeMash, I was lucky enough to meet Jennifer Marsman, Principal Engineer & speaker on the AI for Earth team at Microsoft (@jennifermarsman). She hosted an amazing session where she shared details about some projects on AI for Earth.

AI for Earth puts Microsoft cloud and AI tools in the hands of those working to solve global environmental challenges

See references

The work that the AI for Earth teams are doing is amazing, and I was really impressed by the “Mexican whale story”. The team uses image analysis to identify individual animals in regular people’s photos or videos, and using metadata like the date and location of a photo or video, they can generate paths of animal migration. And yes, the photos come from public social media spaces like Facebook, Instagram or YouTube.

So, I kept this information as a draft for a while, and now that I have some more details, it makes sense to share it. The project name is Wild Me:

Wild Me is using computer vision and deep learning algorithms to power Wildbook, a platform that can identify individual animals within a species.  They also augment their data with an intelligent agent that can mine social media. 

And as usual, a video is the best way to explain this:

Besides Wild Me, there are other amazing projects like SilviaTerra or FarmBeats. You can find the complete list of projects and challenges here (link).

Happy Coding!

Greetings @ Burlington

El Bruno

References