#AI – CustomVision.ai projects can now be exported directly to the Vision AI Developer Kit

Hi !

I was planning to write a couple of posts about the Artificial Intelligence features across the Microsoft suite when I noticed this new feature available in CustomVision.ai.

Custom Vision export to Vision AI Dev Kit.jpg

Last year, Microsoft released a program named [Vision AI Developer Kit for IoT Solution Makers]

Integrated with Azure IoT Edge and working with the Microsoft Azure Machine Learning service (public preview), this Azure IoT starter kit enables developers to build vision AI solutions and run their AI models directly on the device.

vision ai dev kit camera.png

The device uses the Qualcomm Vision Intelligence Platform for hardware acceleration of the AI model, delivering superior inferencing performance, and it is specifically designed to run AI models built with Azure Machine Learning and deployed through Azure IoT Edge.

I just realized that you can also deploy ONNX models from the Azure AI Gallery, Azure ML models and, of course, custom models created with CustomVision.ai to this camera. It’s all supported and managed using Azure IoT Edge.
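As a rough sketch of the plumbing behind that Azure IoT Edge management, these are the kind of Azure CLI steps used to register an IoT Edge device before pushing modules to it. This is only an illustration under my own assumptions: the hub and device names are placeholders, the exact subcommands depend on the azure-iot CLI extension version, and the real Dev Kit onboarding flow may differ.

# assumption: Azure CLI installed and logged in; add the IoT extension
az extension add --name azure-iot

# create an IoT Hub (hypothetical names)
az iot hub create --name my-vision-hub --resource-group my-rg --sku S1

# register an IoT Edge enabled device identity for the camera
az iot hub device-identity create --hub-name my-vision-hub --device-id vision-dev-kit-01 --edge-enabled

# get the connection string used to provision the physical device
az iot hub device-identity show-connection-string --hub-name my-vision-hub --device-id vision-dev-kit-01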

So now it’s time to check my delivery dates, see how long I have to wait for the device to arrive, and start trying the export option available in the CustomVision.ai portal!

Happy Coding!

Greetings @ Burlington

El Bruno

References

My posts on Custom Vision

  1. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  2. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  3. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, drawing frames
  4. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, calculate FPS
  5. Can’t install Docker on Windows 10 Home, need Pro or Enterprise
  6. Running a Custom Vision project in a local Docker Container
  7. Analyzing images in a Console App using a Custom Vision project in a Docker Container
  8. Analyzing images using PostMan from a Custom Vision project hosted in a Docker Container
  9. Building the CustomVision.ai project in Docker in a RaspberryPi
  10. Container dies immediately upon successful start in a RaspberryPi. Of course, it’s all about TensorFlow dependencies
  11. About ports, IPs and more to access a container hosted in a Raspberry Pi
  12. Average response times using a CustomVision.ai docker container in a RaspberryPi and a PC

Windows 10 and YOLOV2 for Object Detection Series

 


#AI – My posts on CustomVision.ai, exporting and using ONNX, Docker, on PC, RaspberryPi, MacOS and more!

Hi !

Now that I have a break between events in Canada and the USA, and I’ve already written several posts on the subject, it’s the perfect time to compile and share the posts I’ve written about CustomVision.ai: how to create an object recognition project, how to use it in web mode by invoking an HTTP endpoint from a console app, and also from Windows 10 applications by exporting the project to ONNX format and using Windows ML. Finally, a couple of posts where I explain how to use CustomVision.ai with Docker on PC, Mac and Raspberry Pi.

  1. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  2. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  3. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, drawing frames
  4. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, calculate FPS
  5. Can’t install Docker on Windows 10 Home, need Pro or Enterprise
  6. Running a Custom Vision project in a local Docker Container
  7. Analyzing images in a Console App using a Custom Vision project in a Docker Container
  8. Analyzing images using PostMan from a Custom Vision project hosted in a Docker Container
  9. Building the CustomVision.ai project in Docker in a RaspberryPi
  10. Container dies immediately upon successful start in a RaspberryPi. Of course, it’s all about TensorFlow dependencies
  11. About ports, IPs and more to access a container hosted in a Raspberry Pi
  12. Average response times using a CustomVision.ai docker container in a RaspberryPi and a PC

Windows 10 and YOLOV2 for Object Detection Series

Greetings @ Burlington

El Bruno

#CustomVision – It’s time to move your Custom Vision projects to #Azure!

Hi !

Over the last few weeks I’ve written a lot about Custom Vision: examples of how to export models to ONNX format or to Docker images, and then use those models in console apps, in UWP apps, and even with Docker on a Raspberry Pi. I’ve had this post in draft mode for a while, so the best thing to do is publish it as soon as possible.

If you use CustomVision.ai, you have probably seen the following message when accessing the portal. It warns us that the service will stop being available in preview / test mode on 2019-03-19. This means that if you want to keep using Custom Vision, you have to move your projects to Azure.

Custom Vision moved to Azure

One option is to recreate the Custom Vision projects, upload the images again and go through the whole tagging and training process from scratch. That option is valid; however, the new projects will have new IDs and also new URLs for their HTTP endpoints. The other option is to [move to Azure] the existing Custom Vision projects.
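To put the "new IDs and new URLs" issue in context, this is roughly what a client call against a Custom Vision prediction endpoint looks like: the project ID, iteration and key are baked into the request, so recreating a project breaks every existing client. The URL below follows the v3.0 prediction API shape and all the values are placeholders; the safest option is to copy the exact Prediction URL from your own project in the portal.

# placeholders: <prediction-key>, <project-id> and <iteration-name> come from the CustomVision.ai portal
curl -X POST "https://southcentralus.api.cognitive.microsoft.com/customvision/v3.0/Prediction/<project-id>/detect/iterations/<iteration-name>/image" \
  -H "Prediction-Key: <prediction-key>" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @test.jpg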

The first thing we have to do is create a Custom Vision resource in an Azure subscription. If you know Azure, this is a couple of clicks and very easy.

azure custom vision resource

We can keep using a Free plan, with the following limits:

  • Up to 2 projects
  • Limit of 5000 training images
  • 2 transactions per second
  • Limit of 10000 predictions per month

Custom Vision Azure Prices

Once the resource is created in Azure, we go back to the CustomVision.ai portal, select the project we want to migrate and, in the Settings section, we’ll see a [Move to Azure] option in the bottom left corner.

Custom Vision move to Azure button

Keeping in mind that, for now, we can only use Custom Vision projects in a single region, we just need to fill in the details to move the project. And that’s it! The Custom Vision project is now migrated to Azure 😀

Custom Vision move to Azure only in South Central

Happy Coding!

Greetings @ Toronto

El Bruno

Resources

  1. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  2. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  3. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, drawing frames
  4. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, calculate FPS
  5. Can’t install Docker on Windows 10 Home, need Pro or Enterprise
  6. Running a Custom Vision project in a local Docker Container
  7. Analyzing images in a Console App using a Custom Vision project in a Docker Container
  8. Analyzing images using PostMan from a Custom Vision project hosted in a Docker Container
  9. Building the CustomVision.ai project in Docker in a RaspberryPi
  10. Container dies immediately upon successful start in a RaspberryPi. Of course, it’s all about TensorFlow dependencies
  11. About ports, IPs and more to access a container hosted in a Raspberry Pi
  12. Average response times using a CustomVision.ai docker container in a RaspberryPi and a PC

Windows 10 and YOLOV2 for Object Detection Series

 

#CustomVision – It’s time to move your Custom Vision projects to #Azure!

Hi !

I’ve been writing a lot about Custom Vision, and how to export CV models to ONNX or Docker images and use them later in different types of scenarios. I’ve had this post in draft mode for a while, so it’s time to publish it.

If you are using CustomVision.ai, you have probably noticed the warning message about the service moving out of its preview / test stage on 2019-03-19. That means you need to move your CV projects to a valid Azure account if you want to keep using them.

Custom Vision moved to Azure

You could create and train some CV projects again from scratch; however, you would get new project IDs and new URLs, and you would need to tag all the images again.

The first step is to create a Custom Vision resource in a valid Azure account. It takes just a couple of clicks and it’s very easy.

azure custom vision resource
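If you prefer the command line over the portal, a sketch like the following should create an equivalent resource with the Azure CLI; the resource group and account names are placeholders, and it’s worth confirming the kind and SKU values with az cognitiveservices account list-kinds and list-skus.

# assumption: Azure CLI installed and logged in (az login)
az group create --name my-cv-rg --location southcentralus

# create the Custom Vision training resource on the free tier (F0)
az cognitiveservices account create --name my-customvision --resource-group my-cv-rg \
  --kind CustomVision.Training --sku F0 --location southcentralus --yes

# a CustomVision.Prediction resource can be created the same way for the prediction endpoint

# list the keys needed to link the resource in the CustomVision.ai portal
az cognitiveservices account keys list --name my-customvision --resource-group my-cv-rg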

There is also the option to keep working in a free mode, with the following limits in the Free instance:

  • Up to 2 projects
  • Limit of 5000 training images
  • 2 transactions per second
  • Limit of 10000 predictions per month

Custom Vision Azure Prices

Now we can go back to the CustomVision.ai portal and select the project we want to migrate to Azure. In the Settings section, in the bottom left corner, we have the [Move to Azure] option.

Custom Vision move to Azure button

Here we need to select the values of the resource we created before, and that’s it! The Custom Vision project is now fully migrated to Azure 😀

Custom Vision move to Azure only in South Central

Happy Coding!

Greetings @ Toronto

El Bruno

Resources

  1. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  2. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  3. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, drawing frames
  4. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, calculate FPS
  5. Can’t install Docker on Windows 10 Home, need Pro or Enterprise
  6. Running a Custom Vision project in a local Docker Container
  7. Analyzing images in a Console App using a Custom Vision project in a Docker Container
  8. Analyzing images using PostMan from a Custom Vision project hosted in a Docker Container
  9. Building the CustomVision.ai project in Docker in a RaspberryPi
  10. Container dies immediately upon successful start in a RaspberryPi. Of course, it’s all about TensorFlow dependencies
  11. About ports, IPs and more to access a container hosted in a Raspberry Pi
  12. Average response times using a CustomVision.ai docker container in a RaspberryPi and a PC

Windows 10 and YOLOV2 for Object Detection Series

 

#AI – My posts on CustomVision.ai, running on ONNX, Docker, on PC, RaspberryPi, MacOS and more !

Hi !

After the events in Canada and the USA, and several posts, I think it’s time to recap the posts I’ve written about CustomVision.ai: how I created a custom object recognition project, later used it as a web HTTP endpoint from a console application, then in Windows 10 with ONNX using Windows ML, and finally ran the object recognition project inside a Docker container on PC, Mac and Raspberry Pi.

  1. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  2. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  3. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, drawing frames
  4. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, calculate FPS
  5. Can’t install Docker on Windows 10 Home, need Pro or Enterprise
  6. Running a Custom Vision project in a local Docker Container
  7. Analyzing images in a Console App using a Custom Vision project in a Docker Container
  8. Analyzing images using PostMan from a Custom Vision project hosted in a Docker Container
  9. Building the CustomVision.ai project in Docker in a RaspberryPi
  10. Container dies immediately upon successful start in a RaspberryPi. Of course, it’s all about TensorFlow dependencies
  11. About ports, IPs and more to access a container hosted in a Raspberry Pi
  12. Average response times using a CustomVision.ai docker container in a RaspberryPi and a PC

Windows 10 and YOLOV2 for Object Detection Series

Greetings @ Burlington

El Bruno

#Docker – Average response times using #CustomVision.ai in a Docker container on a #RaspberryPi and a PC

Hi !

Someone asked me about the performance of a customvision.ai project running on a Raspberry Pi, and I figured the best way to explain it is to show the difference in response times of the same Docker container on a PC and on a Raspberry Pi.

The PC where I’ll run the test has the following configuration:

w10 specs

Note: I know this is quite subjective; a real test should record other data, such as the disk type (SSD), the apps running and more. The idea is to have a reference point, not a full comparison.

The sample process to analyze 20 images takes about 10.45 seconds on the PC.

cv marvel docker local times

The same process on a Raspberry Pi runs in 70.46 seconds.

cv marvel docker raspberry pi times

The average times per image are:

  • PC, 0.52 seconds
  • Raspberry Pi, 3.52 seconds

And the conclusion is easy: having a device that can analyze an image in about 3.5 seconds for less than $30 is impressive!

Happy coding!

Greetings @ Toronto

El Bruno

References

My Posts

  1. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  2. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  3. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, drawing frames
  4. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, calculate FPS
  5. Can’t install Docker on Windows 10 Home, need Pro or Enterprise
  6. Running a Custom Vision project in a local Docker Container
  7. Analyzing images in a Console App using a Custom Vision project in a Docker Container
  8. Analyzing images using PostMan from a Custom Vision project hosted in a Docker Container
  9. Building the CustomVision.ai project in Docker in a RaspberryPi
  10. Container dies immediately upon successful start in a RaspberryPi. Of course, it’s all about TensorFlow dependencies
  11. About ports, IPs and more to access a container hosted in a Raspberry Pi

Windows 10 and YOLOV2 for Object Detection Series

#Docker – Average response times using a CustomVision.ai docker container in a #RaspberryPi and a PC

Hi !

I was testing the performance of the same exported customvision.ai project, running in a Docker container on a standard PC and on a Raspberry Pi. And I’m really surprised and happy about the RPi times.

Let’s start with the times for a container running on a PC with the following specs:

w10 specs

Note: I know this is very subjective, since a deeper study would need more information, like disk type (SSD), Windows 10 version, running apps and more. This is just for reference.

A sample process to analyze 20 images took 10.45 seconds.

cv marvel docker local times

The same process using a container in a Raspberry Pi took 70.46 seconds.

cv marvel docker raspberry pi times
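For reference, the measurement can be reproduced with a small loop like the one below, which posts a folder of test images to the container running locally and prints the total time. The /image route and the folder name are my assumptions about how the exported container is invoked, so adjust them to your own setup.

# a minimal sketch: time the analysis of every image in ./test-images against the local container
start=$(date +%s.%N)
for img in ./test-images/*.jpg; do
  curl -s -X POST -H "Content-Type: application/octet-stream" --data-binary @"$img" http://127.0.0.1/image > /dev/null
done
end=$(date +%s.%N)
echo "Total: $(echo "$end - $start" | bc) seconds"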

The average times per image are:

  • PC, 0.52 seconds
  • Raspberry Pi, 3.52 seconds

Again, amazing times for a $30 device!

Happy coding!

Greetings @ Toronto

El Bruno

References

My Posts

  1. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  2. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  3. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, drawing frames
  4. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, calculate FPS
  5. Can’t install Docker on Windows 10 Home, need Pro or Enterprise
  6. Running a Custom Vision project in a local Docker Container
  7. Analyzing images in a Console App using a Custom Vision project in a Docker Container
  8. Analyzing images using PostMan from a Custom Vision project hosted in a Docker Container
  9. Building the CustomVision.ai project in Docker in a RaspberryPi
  10. Container dies immediately upon successful start in a RaspberryPi. Of course, it’s all about TensorFlow dependencies
  11. About ports, IPs and more to access a container hosted in a Raspberry Pi

Windows 10 and YOLOV2 for Object Detection Series

#Event – Resources used in my session at Canada’s largest makeathon: @MakeUofT [How a PoC at home can scale to Enterprise Level using #CustomVision APIs]

2019 02 16 MakeUofT Custom Vision Bruno

Hi !

What an amazing time at Canada’s largest makeathon: MakeUofT (https://ieee.utoronto.ca/makeuoft/). The event, the people and the ideas were great, and now it’s time to share some of the materials used during my session.

How a PoC at home can scale to Enterprise Level using Custom Vision APIs

It all started with a DIY project to use Computer Vision for security cameras at home. A custom Machine Learning model is the core component used to analyze pictures to detect people, animals and more in a house environment. The AI processing is performed at the edge, in dedicated hardware and the collected information is stored in the cloud.

The same idea can be applied to several CCTV scenarios, like parking lots, train stations, malls and more. However, moving this into enterprise scale brings a set of challenges, which are going to be described and explained in this session.

These are the slides I’ve used

And the source code is available here

https://github.com/elbruno/events/tree/master/2019%2002%2016%20MakeUofT%20Custom%20Vision

In the source code you can find the console and Windows 10 app samples I coded live, and also the exported images of my Custom Vision demo project in Windows, Linux and Raspberry Pi flavors. The third one is where I spent some time updating the original Linux one to work on the small device.
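If you want to try one of the exported images locally, the flow is roughly the one below; the tag name is a placeholder and the Dockerfile inside each exported folder is the source of truth.

# assumption: run from inside one of the exported folders (for example, the linux one)
sudo docker build -t customvision-demo .

# run the container in the background and map port 80 so other devices can reach it
sudo docker run -p 80:80 -d customvision-demo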

And as usual a couple of interesting links

Greetings @ Toronto

El Bruno

#Docker – About ports, IPs and more to access a container hosted on a #RaspberryPi

Hi !

My CustomVision.ai project is built and running in Docker on a Raspberry Pi 3. Now it’s time to use it from applications on other devices, in this case all on the same network.

When I ran my image, I used parameters to define the IP address and the port mappings. The following command is very useful to see this information for a container.

sudo docker port <CONTAINER ID>

01 docker port
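If you don’t have the container ID at hand, the standard listing command also shows the port mappings in its PORTS column:

# list the running containers; the first column is the CONTAINER ID used above
sudo docker ps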

My container is bound to the IP address 127.0.0.1 and uses port 80. This is great for local processes; however, it doesn’t allow the container to be accessed from other devices.

The ideal approach is not to bind the local IP address 127.0.0.1 and only define the 80:80 port mapping. In this case, I run my image with the following command:

sudo docker run -p 80:80 -d <IMAGE ID>

02 docker port 80 and success run

The container uses port 80, and Docker takes control of this port on the Raspberry Pi. The Raspberry Pi’s IP address is [192.168.1.58], so I can now run tests with Postman to analyze images on the RPi.

03 docker image analysis from postman

Super cool. A powerful and cheap image analysis server based on a CustomVision project for less than $30!

Happy coding!

Greetings @ Burlington

El Bruno

References

My Posts

  1. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  2. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  3. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, drawing frames
  4. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, calculate FPS
  5. Can’t install Docker on Windows 10 Home, need Pro or Enterprise
  6. Running a Custom Vision project in a local Docker Container
  7. Analyzing images in a Console App using a Custom Vision project in a Docker Container
  8. Analyzing images using PostMan from a Custom Vision project hosted in a Docker Container
  9. Building the CustomVision.ai project in Docker in a RaspberryPi
  10. Container dies immediately upon successful start in a RaspberryPi. Of course, it’s all about TensorFlow dependencies

Windows 10 and YOLOV2 for Object Detection Series

#Docker – About ports, IPs and more to access a container hosted in a #RaspberryPi

Hi !

So, my CustomVision.ai image is built and running in a container on my Raspberry Pi 3. It’s time to see if I can use it from other devices on the same network. When I ran my image I defined the IP and port, but if you want to check this information, the following command is very useful:

sudo docker port <CONTAINER ID>

01 docker port

So, my container is listening on 127.0.0.1 on port 80. That’s fine for local processing; however, I want to access the container from other devices on the same network. To do this, I’ll run my image with the following command (not binding the IP, just mapping port 80):

sudo docker run -p 80:80 -d <IMAGE ID>

02 docker port 80 and success run

The container is using port 80, and Docker takes over this port on the device. My Raspberry Pi’s IP is [192.168.1.58], so I can go back and run some tests using Postman to analyze images on the device.

03 docker image analysis from postman
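If you prefer the command line over Postman, the same test can be run from any machine on the network with a curl call like the one below; the /image route is what the exported container usually exposes, so check your container if it differs.

# from another device on the same network, post a test image to the container on the Raspberry Pi
curl -X POST -H "Content-Type: application/octet-stream" --data-binary @test.jpg http://192.168.1.58/image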

That’s cool. A small CustomVision image analysis server for less than $30!

Happy coding!

Greetings @ Toronto

El Bruno

References

My Posts

  1. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  2. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  3. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, drawing frames
  4. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, calculate FPS
  5. Can’t install Docker on Windows 10 Home, need Pro or Enterprise
  6. Running a Custom Vision project in a local Docker Container
  7. Analyzing images in a Console App using a Custom Vision project in a Docker Container
  8. Analyzing images using PostMan from a Custom Vision project hosted in a Docker Container
  9. Building the CustomVision.ai project in Docker in a RaspberryPi
  10. Container dies immediately upon successful start in a RaspberryPi. Of course, it’s all about TensorFlow dependencies

Windows 10 and YOLOV2 for Object Detection Series