Artificial Intelligence and Machine Learning are the new buzzwords in the industry. Microsoft’s vision is to make AI accessible to every enterprise, data scientist, developer, information worker, consumer, and device, everywhere in the world. AI has a big role to play in the enterprise space, and the field is progressing at a rapid pace. Without understanding the concepts behind these advanced technologies, developers and administrators will struggle to evaluate the potential impact of new tools and solutions. In this session, we will break down the concepts behind existing technologies, outline the various tools available today, and discuss the direction of AI and ML for developers. We will cover how developers, power users, and information workers can take advantage of Microsoft’s AI and Cognitive Services offerings to build real-life enterprise solutions.
You will learn: 1) an overview of the Microsoft AI Platform, 2) what the Cognitive Services are, 3) what tools are available today, and 4) how to use Cognitive Services to implement real-life business solutions.
Understanding Autonomous Systems, Machine Teaching, and the Bonsai Platform
Innovations in AI are creating the next wave of disruption in industrial technology. Autonomous machines are more than an expansion of automated systems: they are an entirely new way to amplify human expertise. In this session you will take a look at how Project Bonsai speeds up the creation of AI-powered automation to improve production efficiency and reduce downtime, without requiring data scientists. We will also take a look at AirSim, a simulator for drones, cars, and more, built on Unreal Engine. AirSim is a platform for AI research to experiment with deep learning, computer vision, and reinforcement learning algorithms for autonomous vehicles.
Knowledge mining is an emerging discipline in artificial intelligence (AI) that uses a combination of intelligent services to quickly learn from vast amounts of information. It allows organizations to deeply understand and easily explore information, uncover hidden insights, and find relationships and patterns at scale. The platform allows organizations to use machine learning models to address specific business needs. The life cycle of those models can be managed using MLOps. MLOps, or DevOps for machine learning, enables data science and IT teams to collaborate and increase the pace of model development and deployment via monitoring, validation, and governance of machine learning models. These two solutions combined can create a strong platform to extract, enrich, and expose hidden data within any organization. In this session you are going to learn about the three main steps in a knowledge mining platform.
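The “extract, enrich, and expose” flow mentioned above can be pictured as a simple pipeline. The sketch below is purely illustrative (the “skills” here are toy functions I made up for the example); a real knowledge mining platform would plug in OCR, entity recognition, custom ML models, and a proper search index instead:

```python
# A minimal, illustrative sketch of a knowledge-mining pipeline:
# extract raw text, enrich it with "skills", and expose it via search.
# The skills below are toy stand-ins, not real cognitive services.

def extract(documents):
    # pull the raw text out of each source document
    return [doc["content"] for doc in documents]

def enrich(texts, skills):
    # run every enrichment skill over each text, collecting annotations
    enriched = []
    for text in texts:
        record = {"text": text}
        for name, skill in skills.items():
            record[name] = skill(text)
        enriched.append(record)
    return enriched

def explore(index, keyword):
    # expose the enriched records through a naive keyword search
    return [r for r in index if keyword in r["text"].lower()]

# toy skills: word count, and "key phrase" as the longest word
skills = {
    "word_count": lambda t: len(t.split()),
    "key_phrase": lambda t: max(t.split(), key=len),
}

docs = [
    {"content": "Knowledge mining uncovers hidden insights"},
    {"content": "MLOps manages the model life cycle"},
]

index = enrich(extract(docs), skills)
print(explore(index, "mlops")[0]["key_phrase"])  # → manages
```

In a real solution, the `explore` step would typically be backed by a search service rather than a list scan, and MLOps would version and monitor the models used by the enrichment skills.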
Amazing session yesterday with my friends from ConoSurTech: Fernando, Ivana, and Pablo. We had a great time with a high-level overview of Azure Machine Learning, as part of the “Diplomado de AI Fundamentals 2020“.
Diplomado de AI Fundamentals 2020
The diploma program consists of online classes starting October 20, 2020. Classes are held on Tuesdays and Thursdays starting at 6 PM (GMT-3), along with interviews with product and community leaders. The program will cover the topics included in the AI-900 Artificial Intelligence Fundamentals certification, plus the latest releases announced by Microsoft in September.
So, as usual, now is the time to share slides and links. No code this time, so there is no copy-and-paste to deal with.
It was a fast session, with over 100 viewers, and so many topics to talk about! I prepared a full MLOps pipeline to show, I was planning to code a Jupyter Notebook from scratch, and more … we didn’t have time. This comment sums it all up:
Another huge and amazing event from my friends at the @netcoreconf, and the perfect excuse to talk about Python, machine learning, computer vision, and more. This one was also tricky: no slides, just code for 50 minutes, so here are some related resources.
Yesterday I wrote about the new options we have to train models in ML.Net Model Builder. The main news is that we now have the option to use our GPU to train models.
Quick recap: Model Builder supports 3 specific training environments: local CPU, local GPU, and Azure.
Yesterday I tried training a small image recognition model using CPU and GPU, and the training times were very similar. The image training set was small, and I also hadn’t configured my GPU and CUDA environment, so I decided to raise my own bet and test with something a little more challenging.
For this new test, I’ll use a set of images from a Kaggle challenge, [State Farm Distracted Driver Detection] (see references). This is a 1 GB image set: 22,424 images across ten categories.
Of course, I used Model Builder to train an Image Classifier scenario; here is a preview of the configuration:
This training scenario was much more resource-heavy than yesterday’s easy test. The total time was 39.2 minutes. Here is more detail:
GPUs are great for deep learning because the type of calculations they were designed to process are the same as those encountered in deep learning. Images, videos, and other graphics are represented as matrices so that when you perform any operation, such as a zoom-in effect or a camera rotation, all you are doing is applying some mathematical transformation to a matrix.
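To make the “everything is a matrix transformation” point concrete, here is a tiny sketch (my own example, not from the ML.Net docs) showing a zoom-in and a camera rotation, each expressed as a 2×2 matrix applied to a point:

```python
import math

def apply_matrix(m, point):
    # multiply a 2x2 matrix by a 2D point (column vector)
    (a, b), (c, d) = m
    x, y = point
    return (a * x + b * y, c * x + d * y)

# a zoom-in effect is just a uniform scaling matrix
zoom = ((2.0, 0.0),
        (0.0, 2.0))

# a camera rotation by 90 degrees is a rotation matrix
t = math.pi / 2
rot = ((math.cos(t), -math.sin(t)),
       (math.sin(t),  math.cos(t)))

print(apply_matrix(zoom, (3.0, 4.0)))  # → (6.0, 8.0)
```

A GPU does exactly this kind of multiply-and-add arithmetic, but over millions of matrix entries in parallel, which is why the same hardware excels at both graphics and deep learning.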
Even if you have a less powerful GPU (like me!), you may want to give it a try when training a model. The ML.Net Model Builder documentation includes a series of steps to configure a CUDA 10.0 environment, which is good enough for most NVIDIA graphics cards.
Machine Learning.Net (ML.Net) includes a visual, step-by-step option for AutoML: Model Builder. Although ML.Net is based on .Net Core and can be used on Windows, Linux, or Mac, Model Builder is only available for Visual Studio 2019.
And, in the latest preview version, Model Builder supports 3 specific training environments: local CPU, local GPU, and Azure.
And as you can imagine, if you have a decent GPU, the second option is a must. So, let’s review it.
The first step is to install the [ML.NET Model Builder GPU Support (Preview)] extension (see references).
This will take a couple of minutes, and it will add the GPU option to our Model Builder projects. Once the extension is installed, when we create a new ML.Net Model Builder scenario, we can choose between the CPU, GPU, or Azure environments.
In each one, we can see the different options available.
For this test, I used a sample image recognition scenario with 24 images across 3 different labels: fish, flower, and human. This is a very simple scenario, and ResNet will easily handle it.
So, the total time to train an image recognition model with my CPU is:
During the past days, you may have seen these images showing how a new ML model can start from a pixelated image of a face. Let me share the official project description:
Given a low-resolution input image, Face Depixelizer searches the outputs of a generative model (here, StyleGAN) for high-resolution images that are perceptually realistic and downscale correctly
GitHub, Face-Depixelizer (see references)
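The core idea in that description — search a generative model’s outputs for a high-resolution image that downscales back to the input — can be sketched in a few lines. This is a toy illustration only: I use random arrays in place of StyleGAN outputs, and simple average pooling as the downscaling operator.

```python
import numpy as np

def downscale(img, factor):
    # average-pool a square grayscale image by `factor`
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def search_best(candidates, low_res, factor):
    # return the candidate whose downscaled version is closest to `low_res`
    errors = [np.square(downscale(c, factor) - low_res).sum() for c in candidates]
    return candidates[int(np.argmin(errors))]

rng = np.random.default_rng(0)
true_hi = rng.random((8, 8))        # stand-in for a "real" high-res face
low_res = downscale(true_hi, 4)     # the pixelated 2x2 input
candidates = [rng.random((8, 8)) for _ in range(50)] + [true_hi]

best = search_best(candidates, low_res, 4)
```

The catch, and the source of the bias problem discussed below: many very different high-resolution images downscale to nearly the same low-resolution input, so the “best” match is whatever the generative model happens to produce — which depends entirely on what it was trained on.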
Great idea; sadly, one of the first tested images shows this:
You can probably guess the source image, and you can see how wrong the guess is. However, it’s not just a one-off mistake: after a quick search we can find other bad samples from the tool.
And we can even find some scary face generations from video game characters (see references).
Why is this wrong? Just taking a look at the generated faces will give you a sense of what’s wrong here.
There is a trend that basically denies there is an error here. Researchers in deep generative modeling are focused on generating photo-realistic images from more abstract, low-information representations (down-sampled images, cartoons, sketches, etc.). The technology behind this is amazing; however, in this case it is not just a lack of data or a poorly trained ML model. The model uses the popular FFHQ faces dataset, which seems to include a very diverse group of faces.
And here goes my question: how much did the author test this before publishing? I’m guessing that if you just shared this with a couple of friends (ML enthusiasts), someone would point all these errors back to you. Unless your test circle is so lacking in diversity that you never got to this point.
So, I’ll assume the best from the author, but I’ll also note how these practices define a specific type of bias in ML, and in software development in general.
These days I learned a lot about history and empathy, and in the end I think we all need to do our best to be better humans.
In the following video, you will find an amazing story and samples about bias in Machine Learning.
Bonus: if you wonder how this works with Asian faces, here is a Lucy Liu sample.
Today we are lucky to talk with Miguel Egea (@miguelEgea) and Juan Carlos Quijano (@jc_quijano) about Machine Learning and data, lots of data. Miguel is one of the leading references when it comes to data, so having the chance to discuss several topics with him is a real privilege!
Miguel Egea is a Technical Advisor at Solid Quality Mentors (LinkedIn)
Juan Carlos Quijano Abad is a Microsoft Certified Trainer, Azure Solutions Architect, and independent consultant on DevOps adoption (LinkedIn)
Bruno Capuano is Canada Innovation Lead at Avanade and a Microsoft AI MVP (LinkedIn)
We are still in StayAtHome mode, and an excellent way to connect with the communities is to participate in events, either as a speaker or as an attendee.
This time I have the opportunity to speak at NetCoreConf:
The latest in Microsoft technologies and much more, with the best experts. A place where you can learn, share, and network by attending various conferences and workshops. We will talk about NetCore, Azure, Xamarin, AI, and Big Data. What are you waiting for?
NetCoreConf 2020 will hold the first global virtual event dedicated exclusively to the development and consulting sector, aiming to discover and showcase cutting-edge technologies and to create strategic ties that generate joint synergies among professionals in the sector, companies, and institutions.
The agenda is impressive, and I will talk about one of the most interesting products Microsoft has introduced in recent years: Machine Learning.Net. In my session I will cover a bit of the product’s history and a few examples, plus a very interesting tool for non-programmers: AutoML.
Finally, thanks to the great team behind this event: