I’ll be sharing some experiences and insights around Machine Learning, Computer Vision and IoT. Here are my session details.
How a PoC at home can scale to Enterprise Level using Custom Vision APIs (v2!)
It all started with a DIY project to use Computer Vision for security cameras at home. A custom Machine Learning model is the core component used to analyze pictures to detect people, animals and more in a house environment. The AI processing is performed at the edge, on dedicated hardware, and the collected information is stored in the cloud.
The same idea can be applied to several CCTV scenarios, like parking lots, train stations, malls and more. However, moving this to enterprise scale brings a set of challenges, which will be described and explained in this session.
In this new version of the session, we will start from scratch and create a complete “Parking Garage Open Space Tracker” solution with live devices and live cars (small ones, of course).
I posted this one some time ago; however, it’s still a free and VERY USEFUL one!
Christopher Bishop, Technical Fellow and Laboratory Director at Microsoft Research Cambridge, UK, gives us the chance to download his eBook about Pattern Recognition and Machine Learning for free. At more than 700 pages, it is highly recommended reading.
This leading textbook provides a comprehensive introduction to the fields of pattern recognition and machine learning. It is aimed at advanced undergraduates or first-year PhD students, as well as researchers and practitioners. No previous knowledge of pattern recognition or machine learning concepts is assumed. This is the first machine learning textbook to include a comprehensive coverage of recent developments such as probabilistic graphical models and deterministic inference methods, and to emphasize a modern Bayesian perspective. It is suitable for courses on machine learning, statistics, computer science, signal processing, computer vision, data mining, and bioinformatics. This hard cover book has 738 pages in full colour, and there are 431 graded exercises (with solutions available below). Extensive support is provided for course instructors.
It was a pleasure to share some amazing time with the Mississauga .Net User Group last night in my last session of the decade. It was a full night focused on Artificial Intelligence and Machine Learning, and as usual, it’s time to share the resources used in the session.
I’m getting ready for my last event of the year, and I just realized that in the latest update of Model Builder, we have the chance to build our own Image Classifier scenario. Let’s start with the official Model Builder definition (see references):
ML.NET Model Builder provides an easy to understand visual interface to build, train, and deploy custom machine learning models. Prior machine learning expertise is not required. Model Builder supports AutoML, which automatically explores different machine learning algorithms and settings to help you find the one that best suits your scenario.
Working with images has been supported in ML.NET for a while. In the ML.NET samples, we have sample scenarios like
Image Classification Model Training – Preferred API (Based on native TensorFlow transfer learning)
In this sample app you create your own custom image classifier model by natively training a TensorFlow model from ML.NET API with your own images.
We even have an amazing tutorial for creating our own image classification model from scratch:
Tutorial: Generate an ML.NET image classification model from a pre-trained TensorFlow model
Learn how to transfer the knowledge from an existing TensorFlow model into a new ML.NET image classification model. The TensorFlow model was trained to classify images into a thousand categories. The ML.NET model makes use of part of the TensorFlow model in its pipeline to train a model to classify images into 3 categories.
Training an Image Classification model from scratch requires setting millions of parameters, a ton of labeled training data and a vast amount of compute resources (hundreds of GPU hours). While not as effective as training a custom model from scratch, transfer learning allows you to shortcut this process by working with thousands of images vs. millions of labeled images and build a customized model fairly quickly (within an hour on a machine without a GPU). This tutorial scales that process down even further, using only a dozen training images.
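As an illustration of the transfer-learning idea described above (not the ML.NET API itself), here is a minimal, hypothetical Python sketch: a frozen, randomly initialized “feature extractor” stands in for the pre-trained TensorFlow layers, and only a small classification head is trained on a dozen examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained feature extractor: in real transfer
# learning this would be the TensorFlow model's penultimate layer.
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    # Frozen weights plus a ReLU; nothing here is updated during training.
    return np.maximum(x @ W_frozen, 0)

# A dozen labeled examples across 3 classes, echoing the tutorial's tiny dataset.
X = rng.normal(size=(12, 64))
y = np.repeat([0, 1, 2], 4)

# Only the small classification head is trained: a least-squares linear
# probe fit on the frozen features against one-hot targets.
feats = extract_features(X)
targets = np.eye(3)[y]
W_head, *_ = np.linalg.lstsq(feats, targets, rcond=None)

pred = (feats @ W_head).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```

This is why transfer learning is so cheap: the expensive part (the feature extractor) is reused as-is, and only the tiny head is fit to the new labels.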
And now, I found that Model Builder also supports an Image Classification scenario.
It follows the Model Builder standard workflow, starting with the selection of the scenario:
And then selecting a folder with the images.
Important: Model Builder expects image data to be JPG or PNG files organized in folders that correspond to the categories of the classification.
To load images into Model Builder, provide the path to a single top-level directory:
This top-level directory contains one subfolder for each of the categories to predict.
Each subfolder contains the image files belonging to its category.
Once the folder is selected, we can see a preview of the images and labels loaded from the folder.
For more information about how to organize images for this scenario, refer to Load training data into Model Builder.
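The expected folder layout can be sketched in a few lines of Python; the labels and file names below are hypothetical placeholders, not part of the Model Builder tooling itself.

```python
import os
import tempfile

# Hypothetical layout matching what Model Builder expects: one top-level
# folder, one subfolder per label, and JPG/PNG files inside each subfolder.
layout = {
    "dog": ["dog1.jpg", "dog2.png"],
    "cat": ["cat1.jpg"],
    "bird": ["bird1.png"],
}

root = tempfile.mkdtemp(prefix="images-")
for label, files in layout.items():
    os.makedirs(os.path.join(root, label))
    for name in files:
        # Create empty placeholder files standing in for real images.
        open(os.path.join(root, label, name), "wb").close()

# The categories to predict are simply the subfolder names.
categories = sorted(os.listdir(root))
print(categories)
```

You would point Model Builder at `root`; it infers the labels from the subfolder names.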
And now we start the training process. This may take a while, depending on your hardware. I’m using the sample set of drawings that we used on the InsiderDev Tour for Custom Vision. These are 24 drawing images with 3 labels, and on a PC with an i7, 32 GB of RAM, and an SSD, the training process took a little longer than 2 minutes.
Once the training is complete, we have decent accuracy in our model, so it’s time to test. Before Model Builder’s last step, we have the chance to test the model with some test images.
Using one of the images that I created at Ignite in Orlando, the trained model detects a human with 99% confidence.
And the final step is to add the generated model and code to our project. I’ll write about how to use this generated code in the near future.
Time for a very interesting feature, part of the Azure family: Azure Open Datasets. OK, when you read the name, you probably get 95% of the idea; however, let’s dig into the official definition (see references).
Azure Open Datasets are curated public datasets that you can use to add scenario-specific features to machine learning solutions for more accurate models. Open Datasets are in the cloud on Microsoft Azure and are integrated into Azure Machine Learning and readily available to Azure Databricks and Machine Learning Studio (classic). You can also access the datasets through APIs and use them in other products, such as Power BI and Azure Data Factory.
Datasets include public-domain data for weather, census, holidays, public safety, and location that help you train machine learning models and enrich predictive solutions. You can also share your public datasets on Azure Open Datasets.
This is amazing; however, this feature becomes really useful when you start to work with the new Azure Machine Learning Studio (Preview). Now, in the [Assets / Datasets] section, we can use:
Datasets from local files
Datasets from DataStore
Datasets from WebFiles
Datasets from the Open DataSet repository
The last one is awesome because we can work with free sample data like weather, census, and holidays.
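Beyond the Studio UI, the same datasets are reachable from code. As an example, the public-holidays dataset can be pulled with the `azureml-opendatasets` Python package; this is a minimal sketch, and the date range is just an assumption for illustration.

```python
from datetime import datetime, timedelta

# Pull one year of public-holiday data ending on an arbitrary example date.
end_date = datetime(2019, 12, 31)
start_date = end_date - timedelta(days=365)

try:
    # Requires the azureml-opendatasets package and network access.
    from azureml.opendatasets import PublicHolidays

    holidays = PublicHolidays(start_date=start_date, end_date=end_date)
    df = holidays.to_pandas_dataframe()
    print(df.head())
except ImportError:
    print("azureml-opendatasets not installed; see the Azure Open Datasets docs")
```

The resulting DataFrame can then be joined against your own data, e.g. to add an “is holiday” feature to a forecasting model.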
Microsoft opened a brand new Microsoft Reactor in Toronto, and I’m lucky enough to host an AI session about Anomaly Detection. Below are the details.
Detecting anomalies is a common scenario which can be applied to dozens of industries. From the analysis of power consumption, medical data, or even analysis of personal information, anomalies can be detected based on historical data.
During this workshop, Bruno will guide attendees to code a complete system that will detect anomalies: you will train a model based on historical data, and later use the same model with new data to identify anomalies. At the end of the workshop, attendees will review a new set of options to create an Anomaly Detection System without a single line of code!
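The workshop itself uses richer tooling, but the core idea can be sketched with a simple statistical baseline: fit a mean and standard deviation on historical data, then flag new readings that deviate too far (a z-score test). The readings below are made-up values for illustration.

```python
import statistics

# Hypothetical historical power-consumption readings (kWh per hour).
history = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0, 4.2, 3.9]
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomaly(value, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    return abs(value - mean) / stdev > threshold

# "Train" on history, then score new data with the same model.
new_readings = [4.0, 4.1, 9.5]  # the last one is a spike
anomalies = [v for v in new_readings if is_anomaly(v)]
print(anomalies)
```

Real systems replace the mean/stdev pair with seasonal-aware models, but the train-on-history, score-new-data workflow is the same one the workshop builds.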
Please bring a laptop or other personal device to participate in this hands-on workshop.