#Azure – Sending custom Telemetry and Event information from a #RaspberryPi device to #AzureIoT Central

Hi!

Azure IoT Central is one of the amazing services we can use on Azure. I was wondering how easy it is to use a Raspberry Pi running Raspbian with Azure IoT, and here is my experience.

Let’s assume we have a device up to date and running Raspbian; our next step is to create an Azure IoT Central application. The official step-by-step guide is the main resource for this:

Create an Azure IoT Central application (see references)

Once we have our application, we can quickly create a new Raspberry Pi device and use it. However, I’ll do an extra step first, a lesson learned as a handsome developer.

Create a Device Template

Go to [Device Templates] and create a new template

azure iot central create new device template

For Raspberry Pi, I’ll name this [Raspberry Pi Dev]

azure iot central create new device template raspberry pi dev

So now I can add a new real device in the [Devices] section of the left menu.

azure iot central raspberry pi dev add new real device

Once you create a new real device, it is important to copy and save the connection information for later. To access it, use the [Connect] button at the top right.

azure iot central raspberry pi dev real device connect information

Almost there. There is an official tutorial that explains how to send random telemetry information with a Python script on a Raspberry Pi. I’ll use it as the base for this scenario:

Connect a Raspberry Pi to your Azure IoT Central application (Python) (see references)

For this demo, I’ll add a custom telemetry property and a custom event to the device. Since I won’t use the device to track temperature, accelerometer data, and so on, I think it makes sense to track some custom information.

So, I’ll go back to my Device Template definition and I’ll add a new Telemetry, named [t1], with the following information.

azure iot central raspberry pi dev new telemetry information

And now I can run a custom version of my script that sends the new telemetry information for [t1] (sample in line 18).
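
As a rough idea of what that change looks like, here is a minimal sketch that provisions the device and sends the custom [t1] value in a loop. It uses the azure-iot-device SDK rather than the helper library from the official tutorial, so treat it as an illustration only: the scope ID, device ID and primary key placeholders come from the [Connect] dialog, and the payload field names are assumptions that must match the fields defined in the Device Template.

```python
# Minimal sketch (not the original gist): send the custom [t1] telemetry value
# to Azure IoT Central using the azure-iot-device SDK (pip install azure-iot-device).
# SCOPE_ID, DEVICE_ID and DEVICE_KEY are the values copied from the [Connect] dialog.
import json
import random
import time

from azure.iot.device import IoTHubDeviceClient, Message, ProvisioningDeviceClient

SCOPE_ID = "<id scope>"
DEVICE_ID = "<device id>"
DEVICE_KEY = "<primary key>"

# IoT Central devices are provisioned through the Device Provisioning Service.
provisioning = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host="global.azure-devices-provisioning.net",
    registration_id=DEVICE_ID,
    id_scope=SCOPE_ID,
    symmetric_key=DEVICE_KEY,
)
registration = provisioning.register()

# Connect to the IoT hub assigned by the provisioning service.
device = IoTHubDeviceClient.create_from_symmetric_key(
    symmetric_key=DEVICE_KEY,
    hostname=registration.registration_state.assigned_hub,
    device_id=DEVICE_ID,
)
device.connect()

while True:
    # Field names ("temp", "t1") are assumptions and must match the Device Template.
    payload = {"temp": random.uniform(20.0, 30.0), "t1": random.uniform(0.0, 100.0)}
    device.send_message(Message(json.dumps(payload)))
    time.sleep(10)
```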

After running the sample script for a couple of minutes, I can see the telemetry information for [T1]. In this view, I enabled [Temperature] and [T1] to display the timeline.

azure iot central raspberry pi dev real device dashboard telemetry

And the next step is to add an event, which is also a very important use case in Azure IoT. Back in the Device Template, I add a new event named [event1].

azure iot central raspberry pi dev new event information

And I added some extra lines of code to also send an event between telemetry messages (line 22).
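
Continuing the previous sketch (same device client), this is roughly what those extra lines could look like. The assumption here is that, in a classic IoT Central template, an event is sent like any other measurement, using the field name defined in the template (in this case [event1]).

```python
# Sketch only: every fifth telemetry message, also send the [event1] event.
# Reuses the `device`, `Message`, `json`, `random` and `time` names from the
# previous sketch; the "event1" field name must match the Device Template.
counter = 0
while True:
    device.send_message(Message(json.dumps({"t1": random.uniform(0.0, 100.0)})))
    counter += 1
    if counter % 5 == 0:
        device.send_message(Message(json.dumps({"event1": "Button pressed"})))
    time.sleep(10)
```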

In the following image, we can see how the events appear in the timeline, and we can also get some extra details by clicking on each event.

azure iot central raspberry pi dev real device dashboard telemetry and events

Very cool! The next step will be to integrate this with some image recognition scenarios.

Happy Coding!

Greetings @ Burlington

El Bruno

References


#Event – Resources for the sessions about #DeepLearning and #CustomVision at the @ChicagoCodeCamp

Hi!

Another post-event post, this time with a big thanks to the team behind one of the most amazing events I’ve been to this year: Chicago CodeCamp.

I had the chance to meet a lot of amazing people, to learn a lot during the sessions and also to visit the great city of Chicago.

As usual, now it’s time to share slides, code and more.

Deep Learning for Everyone? Challenge Accepted!

Let’s start with the Deep Learning resources


Demos Source Code: https://github.com/elbruno/events/tree/master/2019%2005%2011%20Chicago%20CodeCamp%20Deep%20Learning

Session: How a PoC at home can scale to Enterprise Level using Custom Vision APIs

And also the [How a PoC at home can scale to Enterprise Level using Custom Vision APIs] resources

Demos Source Code: https://github.com/elbruno/events/tree/master/2019%2005%2011%20Chicago%20CodeCamp%20CustomVision

And finally, some Machine Learning.Net, Deep Learning and Custom Vision resources:

My posts on Custom Vision and ONNX

  1. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  2. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  3. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, drawing frames
  4. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, calculate FPS
  5. Can’t install Docker on Windows 10 Home, need Pro or Enterprise
  6. Running a Custom Vision project in a local Docker Container
  7. Analyzing images in a Console App using a Custom Vision project in a Docker Container
  8. Analyzing images using PostMan from a Custom Vision project hosted in a Docker Container
  9. Building the CustomVision.ai project in Docker in a RaspberryPi
  10. Container dies immediately upon successful start in a RaspberryPi. Of course, it’s all about TensorFlow dependencies
  11. About ports, IPs and more to access a container hosted in a Raspberry Pi
  12. Average response times using a CustomVision.ai docker container in a RaspberryPi and a PC

Windows 10 and YOLOV2 for Object Detection Series

See you at the next one in Chicago for some Deep Learning fun!

Happy coding!

Greetings @ Toronto

El Bruno

#CustomVision – It’s time to move your Custom Vision projects to #Azure!

Hi!

During the past few weeks I’ve written a lot about Custom Vision: examples of how to export models to ONNX format or to Docker images, and then how to use those models in console apps or UWP apps, and even with Docker on a Raspberry Pi. I’ve had this post in draft for a while, so the best thing is to publish it as soon as possible.

If you are a CustomVision.ai user, you have surely seen the following message when you access the portal. It warns us that the service will no longer be available in preview / test mode on 2019-03-19. This means that if you want to keep using CV, you have to move your projects to Azure.

Custom Vision moved to Azure

One option is to recreate the CV projects, upload the images, and go through the whole tagging and training process from scratch. That option is valid. However, the new projects will have new IDs and also new URLs for their HTTP endpoints. The other option is to [move to Azure] the existing CV projects.

The first thing we need to do is create a Custom Vision resource in an Azure subscription. If you know Azure, this takes a couple of clicks and is very easy.

azure custom vision resource

We can keep using a Free plan, with the following parameters:

  • Up to 2 projects
  • Limit of 5000 training images
  • 2 transactions per second
  • Limit of 10000 predictions per month

Custom Vision Azure Prices

Once the resource is created in Azure, we go back to the CustomVision.ai portal, select the project we want to migrate, and in the Settings section we will see a [Move to Azure] option in the bottom left corner.

Custom Vision move to Azure button

Keeping in mind that, for now, CV projects can only be used in a single region, we just have to fill in the details to move the project. And that’s it! The CV project is migrated to Azure 😀

Custom Vision move to Azure only in South Central

Happy Coding!

Greetings @ Toronto

El Bruno

Resources

  1. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  2. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  3. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, drawing frames
  4. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, calculate FPS
  5. Can’t install Docker on Windows 10 Home, need Pro or Enterprise
  6. Running a Custom Vision project in a local Docker Container
  7. Analyzing images in a Console App using a Custom Vision project in a Docker Container
  8. Analyzing images using PostMan from a Custom Vision project hosted in a Docker Container
  9. Building the CustomVision.ai project in Docker in a RaspberryPi
  10. Container dies immediately upon successful start in a RaspberryPi. Of course, it’s all about TensorFlow dependencies
  11. About ports, IPs and more to access a container hosted in a Raspberry Pi
  12. Average response times using a CustomVision.ai docker container in a RaspberryPi and a PC

Windows 10 and YOLOV2 for Object Detection Series

 

#CustomVision – It’s time to move your Custom Vision projects to #Azure!

Hi !

I’ve been writing a lot about Custom Vision and how to use and export CV models to ONNX or Docker images, to be used later in different types of scenarios. I’ve had this post in draft mode for a while, so it’s time to publish it.

If you are using CustomVision.ai, you have probably noticed the warning message about the service moving out of the preview / test stage on 2019-03-19. That means you need to move your CV projects to a valid Azure account if you want to keep using them.

Custom Vision moved to Azure

You could recreate and retrain some CV projects, however you would get new project IDs and new URLs, and you would need to tag all the images again.
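
That matters because any client that calls the prediction endpoint depends on those values. As an illustration (the project ID, publish name and key below are placeholders, and the exact URL format is whatever your project shows in its [Prediction URL] dialog), a typical prediction call looks like this:

```python
# Illustrative only: a typical client call to a Custom Vision prediction endpoint.
# If the project is recreated, the project id, publish name and prediction key
# all change, and every client like this one has to be updated.
import requests

PREDICTION_KEY = "<prediction key>"
PREDICTION_URL = (
    "https://southcentralus.api.cognitive.microsoft.com/customvision/v3.0/"
    "Prediction/<project id>/classify/iterations/<publish name>/image"
)

with open("test.jpg", "rb") as image_file:
    response = requests.post(
        PREDICTION_URL,
        headers={
            "Prediction-Key": PREDICTION_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_file.read(),
    )

# Each prediction includes a tag name and a probability.
for prediction in response.json()["predictions"]:
    print(prediction["tagName"], prediction["probability"])
```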

The first action here is to create a Custom Vision resource in a valid Azure account. That takes a couple of clicks and it’s very easy.

azure custom vision resource

There is also the option to keep working in a free-mode scenario, with the following parameters for the Free instance:

  • Up to 2 projects
  • Limit of 5000 training images
  • 2 transactions per second
  • Limit of 10000 predictions per month

Custom Vision Azure Prices

Now we can go back to the CustomVision.ai portal and select the project we want to migrate to Azure. In the Settings section, at the bottom left corner, we have the [Move to Azure] option.

Custom Vision move to Azure button

Here we need to select the values of the resource we created before, and that’s it! The Custom Vision project is now fully migrated to Azure 😀

Custom Vision move to Azure only in South Central

Happy Coding!

Greetings @ Toronto

El Bruno

Resources

  1. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  2. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  3. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, drawing frames
  4. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, calculate FPS
  5. Can’t install Docker on Windows 10 Home, need Pro or Enterprise
  6. Running a Custom Vision project in a local Docker Container
  7. Analyzing images in a Console App using a Custom Vision project in a Docker Container
  8. Analyzing images using PostMan from a Custom Vision project hosted in a Docker Container
  9. Building the CustomVision.ai project in Docker in a RaspberryPi
  10. Container dies immediately upon successful start in a RaspberryPi. Of course, it’s all about TensorFlow dependencies
  11. About ports, IPs and more to access a container hosted in a Raspberry Pi
  12. Average response times using a CustomVision.ai docker container in a RaspberryPi and a PC

Windows 10 and YOLOV2 for Object Detection Series

 

#Azure – Single Key for all Services in #CognitiveServices, that’s cool :D

Hi !

Quick Friday post. And an amazing one, because now we have the chance to create a single key to be used across a bunch of Cognitive Services. That means we don’t need to remember and store different keys for LUIS, Face, Emotion, and more. A single key will cover most of these scenarios in 3 simple steps.
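
To make the idea concrete, here is a rough sketch of the same key being used against two different Cognitive Services APIs. The region, API versions and sample inputs are assumptions; the point is simply that the one key goes in the same Ocp-Apim-Subscription-Key header for both calls.

```python
# Rough sketch: one multi-service Cognitive Services key used with two different APIs.
import requests

KEY = "<single cognitive services key>"
ENDPOINT = "https://westus.api.cognitive.microsoft.com"  # region is an assumption
HEADERS = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}

# Computer Vision: describe an image by URL.
vision = requests.post(
    f"{ENDPOINT}/vision/v2.0/analyze",
    params={"visualFeatures": "Description"},
    headers=HEADERS,
    json={"url": "https://example.com/photo.jpg"},
)
print(vision.json())

# Text Analytics: sentiment for a short document, same key and endpoint.
text = requests.post(
    f"{ENDPOINT}/text/analytics/v2.1/sentiment",
    headers=HEADERS,
    json={"documents": [{"id": "1", "language": "en", "text": "Azure is great!"}]},
)
print(text.json())
```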

 

More information in What’s New? A Single Key for Cognitive Services on Channel 9 by Noelle LaCharite

Happy coding!

Greetings @ Burlington

El Bruno

#Event – Getting Started with Machine Learning.Net & Windows Machine Learning on Nov 22 in Mississauga

abstract blackboard bulb chalk
Photo by Pixabay on Pexels.com

Hi !

So my friends from the Mississauga .Net User Group (link) were kind enough to invite me to host a session on November 22nd at TEK Systems in Mississauga. I’ll share some of the updates on ML.Net, currently in version 0.6, and some other very cool stuff around Microsoft and AI.

You can register for the event here, and the formal description is:

Machine Learning has moved out of the lab and into production systems. Understanding how to work with this technology is one of the essential skills for developers today. In this session, you will learn the basics of machine learning, how to use existing models and services in your apps, and how to get started with creating your own simple models.
In other words, if you are a .Net developer, this session is for you. We will cover the basics of Machine Learning.Net, a complete ML framework that works with C#, F# or any other .Net Core language.

Happy coding!

Greetings @ Toronto

El Bruno

Event – Artificial Intelligence and Machine Learning in #Azure on Nov 14 in Toronto

action android device electronics
Photo by Matan Segev on Pexels.com

Hi !

So my friends from the Azure Group Meetup (link) were kind enough to invite me to host a session on November 14th, in the best place ever: the Avanade Toronto office! So I’ll pair an Innovation Day with a session on Artificial Intelligence in Azure.

You can register for the event here, and the formal description is:

Join us for a review of AI, Machine Learning and Cognitive Services in Microsoft Azure. In this session we will discuss some of the most popular tools and frameworks available for AI in Azure: Cognitive Services, Azure Machine Learning, and the Azure Bot Service.

You will get to see lots of code and demos of some real world applications using the various different Cognitive services including vision, speech, and language.

Happy coding!

Greetings @ Toronto

El Bruno

#Kinect – More information about the depth sensor in Project Kinect for #Azure (and #Hololens Bonus!)

Hi!

During the past Faculty Summit 2018, one of the exhibitions presented some details of the upcoming [Project Kinect for Azure]. The following video demonstrates the quality of the depth sensor in near and far mode. In both demonstrations you can see that it is much more powerful than the sensors we know.

Pay special attention at 01:50 in the video, where it is mentioned that this technology will be used in the next version of Hololens. The tracking options for the new device will be impressive.

Greetings @ Burlington

El Bruno

References

#Kinect – More information about the depth sensor in Project Kinect for #Azure (#Hololens bonus!)

Hi!

During the past Faculty Summit 2018, one of the exhibits presented some details of the upcoming [Project Kinect for Azure]. The following video demonstrates the quality of the depth sensor in near and far mode. In both demonstrations you can see that it is much more powerful than the sensors we know.

And a special detail at 01:50 in the video, where it is mentioned that this technology will be used in the next version of Hololens. With that as a base, the tracking options for the new device will be impressive.

Greetings @ Burlington

El Bruno

References

#Event – Resources used on the event [Introduction to Microsoft Azure Security]


Hi!

Amazing event last Friday: [Power of the Cloud]. While I thought the 90 minutes of the [Introduction To Microsoft Azure Security] session would be too many, sharing them with Adin (@adinermie) made the session run smoothly.

As always, now is the time to share the slides that we used in the event.

Slides

And the most important resources

Microsoft Build Sessions

Secure DevOps Kit for Azure (AzSK)

Happy coding!

Greetings @ Mississauga

El Bruno

PS: As you read this, it is very likely that I will be in the middle of my session [Getting Started With Machine Learning] on the InsiderDevTour.