#Microsoft – Insider Programs at Microsoft (thanks to @shanselman!)

Hi!

3 years ago, I wrote a post where I listed some of the Insider Programs at Microsoft.

Microsoft – List of almost all #Insider Programs (the ones I know …)

That was back in November 2016, and the list included Windows Insider, Office Insider, Visual Studio Insider, the almost-dead Skype Insider, and more.

Now, thanks to Scott Hanselman (@shanselman), I found an official resource listing all the Insider Programs at Microsoft.

Insider Programs at Microsoft

As of today, it lists Bing, Edge, Office, Skype (still alive!), Visual Studio, Visual Studio Code, Windows, and Xbox. Super cool!

Happy coding!

Greetings @ Burlington

El Bruno


#Python – How to create a Virtual Environment in #Windows10 (Easy one, and brain backup!)

Hi!

Quick post today, and mostly a reminder on how to create a virtual environment in Python on Windows 10. I’ve been doing this mostly on my Mac and my Raspberry Pi, and I always forget how to do it on Windows, so … I’m writing this post as a brain backup.

Install Python 3.

Download the installer from the official Python site (see references). I usually install it in the root of my C: drive and name the folder with the version, for example: c:\Python37_64.

Remember to also add this folder and its Scripts subfolder to the PATH environment variable.

Note: Once you start installing tools that use Python, your OS becomes a nightmare. You will have your own installed Python, the version installed with Visual Studio, the one that comes with Visual Studio Code, and another one for Anaconda.

I’m not sure if this is a best practice or not, but I usually remove all the other versions and keep mine in the root of the C: drive.

Ok, let’s go on.

For a new virtual environment named “testEnv”, open a command prompt and navigate to the Python folder. Then run the command:

python -m virtualenv testEnv

After a couple of seconds, the virtual environment will be created, and you can use it by running the [activate.bat] script. In this example:

  • The virtual environment will be created at [c:\Python37_64\testEnv]
  • The virtual environment activate script will be at [c:\Python37_64\testEnv\Scripts\activate.bat]

So the full command sequence is similar to this one
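As a sketch of that sequence, assuming Python is installed in c:\Python37_64 and the virtualenv package has already been installed with pip, a Windows command prompt session looks like this:

```shell
:: Navigate to the Python installation folder
cd c:\Python37_64

:: Create the virtual environment (requires: pip install virtualenv)
python -m virtualenv testEnv

:: Activate it; the prompt will change to show (testEnv)
testEnv\Scripts\activate.bat
```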

Now you have your virtual environment up and running, and you may want to start adding your own packages or tools. For example, installing Numpy, or listing the installed packages.
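With the environment activated, these are the standard pip commands (Numpy here is just the example package from above):

```shell
:: Install a package into the active virtual environment
pip install numpy

:: List the packages installed in the environment
pip list
```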

Finally, you may leave the virtual environment with the command

deactivate

Happy Coding!

Greetings @ Toronto

El Bruno

Resources

#Personal – Amazing surprises managing the internet traffic at home #RaspberryPi #PiHole #Windows10

Hi !

A couple of days ago, my friend Luca (@lucavgobbi) told me about Pi-hole, a network-wide ad blocker. Even better, let me copy the official description from their GitHub repo:

The Pi-hole® is a DNS sinkhole that protects your devices from unwanted content, without installing any client-side software.

  • Easy-to-install: our versatile installer walks you through the process, and takes less than ten minutes
  • Resolute: content is blocked in non-browser locations, such as ad-laden mobile apps and smart TVs
  • Responsive: seamlessly speeds up the feel of everyday browsing by caching DNS queries
  • Lightweight: runs smoothly with minimal hardware and software requirements
  • Robust: a command line interface that is quality assured for interoperability
  • Insightful: a beautiful responsive Web Interface dashboard to view and control your Pi-hole
  • Versatile: can optionally function as a DHCP server, ensuring all your devices are protected automatically
  • Scalable: capable of handling hundreds of millions of queries when installed on server-grade hardware
  • Modern: blocks ads over both IPv4 and IPv6
  • Free: open source software which helps ensure you are the sole person in control of your privacy

I set this up on a spare Raspberry Pi 3 I have at home and have kept it running for the last couple of days. I was shocked when I realized that approximately 30% of my internet traffic is … not so good.

PiHole dashboard

One of the cool features of Pi-hole is that you can work with its logs. So I decided to apply some very powerful Machine Learning algorithms to detect anomalies and strange behaviors.

In the meantime, I decided to read the logs and do some filtering using Excel. And I found a lot of very strange URLs. Today I’ll share some of the Microsoft ones.
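The same kind of filtering can also be done straight from the command line on the Pi. A minimal sketch, assuming the dnsmasq-style log format that Pi-hole writes (the sample lines and file path here are made up for illustration):

```shell
# Create a few sample log lines in Pi-hole's dnsmasq-style format (illustrative data)
cat > /tmp/pihole_sample.log <<'EOF'
May  1 10:00:01 dnsmasq[123]: query[A] watson.telemetry.microsoft.com from 192.168.1.10
May  1 10:00:02 dnsmasq[123]: query[A] licensing.mp.microsoft.com from 192.168.1.10
May  1 10:00:03 dnsmasq[123]: query[A] watson.telemetry.microsoft.com from 192.168.1.11
EOF

# Extract the queried domain (6th whitespace-separated field) and rank by frequency
awk '/query\[/ {print $6}' /tmp/pihole_sample.log | sort | uniq -c | sort -rn
```

The same pipeline against the real log file quickly surfaces the chattiest domains on the network.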

So, for example, do you know what this set of URLs has in common?

  • location-inference-westus.cloudapp.net
  • licensing.mp.microsoft.com
  • watson.telemetry.microsoft.com

They are all Microsoft endpoints! It seems that Windows 10 sends a lot of diagnostic and other types of data. Lucky for us, most of these endpoints are well documented for each Windows 10 version. So, for example, I don’t use a lot of UWP apps, and it seems to me that the location service does not need to send much information from a FIXED PC.

I decided to add some of these domains to the domain blacklist, and so far, so good. Windows still works great. I re-enabled some of the URLs so I can also use Visual Studio and Azure DevOps, and my user experience is still the same (with 30% less traffic!)
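For reference, domains can be added to the blacklist from the web dashboard or with the pihole command-line tool. A sketch (the exact subcommand may vary between Pi-hole versions, and these domains are just the examples from above):

```shell
# Add a domain to the Pi-hole blacklist
pihole -b watson.telemetry.microsoft.com

# Remove it again if something breaks (e.g. Visual Studio or Azure DevOps)
pihole -b -d watson.telemetry.microsoft.com
```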

I may also want to write about some of the domains I found other chatty devices using, like my Amazon Alexa, my Roku, and more … maybe in the next post! And kudos to the Pi-hole team!

Happy Coding!

Greetings @ Burlington

El Bruno

References

#MacOS – Not so good with Screen Resolutions and “even better” at projection mode

Hi !

The first time I was doing a demo and connecting my MacBook Pro to a projector, I found out that the screen settings and configuration in macOS are completely different from the usual Windows experience. And, IMHO, not in a good way.

In Windows, when you are working with 2 or more screens (monitors / projectors), you have the chance to define specific settings for each one. These settings include orientation, resolution, and more.

A macOS display runs at its native resolution, which is the best resolution designed for that display. To change the resolution, you need to open System Preferences and select Displays. Then, in the [Display] section, we can change the resolution by selecting the [Scaled] option.

So far, so good. Each option also shows a “Looks like …” resolution label, which is great. But it’s not good enough. Before a demo for a client or a user group, I like to dry-run my presentations and demos using a custom resolution. You never know which hardware is going to be at your final destination.

After testing a couple of apps, I finally found a very good one with the features I need:

Disable Monitor, https://github.com/Eun/DisableMonitor

Disable Monitor allows me to quickly change any screen resolution with 2 clicks; it can also lock the screens and detect monitors.

Simple and easy. Happy coding!

Greetings @ Toronto

El Bruno

My posts on experiences in Mac from a Windows user

#Windows10 – Windows #VisionSkills sample UWP App

Hi!

Yesterday the Windows Team announced the preview version of Windows Vision Skills. So today I was browsing the samples on GitHub, and I’ve created a simplified version of the skeleton tracker using a live feed from a webcam.

Here are some notes about my GitHub sample

  • The UWP App must target Windows 10, version 1809
  • I added the NuGet packages [Microsoft.AI.Skills.Vision.SkeletalDetectorPreview] and [Microsoft.Toolkit.Uwp.UI.Controls]
  • The MainView uses the CameraPreview control from the [Microsoft.Toolkit.Uwp.UI.Controls] toolkit
  • Each frame is processed, and I use a SkeletalDetectorBinding to detect skeletons / bodies
  • The core detection is performed here
        private async Task RunSkillAsync(VideoFrame frame, bool isStream)
        {
            m_evalPerfStopwatch.Restart();

            // Update input image and run the skill against it
            await m_skeletalDetectorBinding.SetInputImageAsync(frame);
            await m_skeletalDetectorSkill.EvaluateAsync(m_skeletalDetectorBinding);

            m_evalPerfStopwatch.Stop();
            m_skeletalDetectionRunTime = m_evalPerfStopwatch.ElapsedMilliseconds;

            await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
            {
                m_bodyRenderer.Update(m_skeletalDetectorBinding.Bodies, !isStream);
                m_bodyRenderer.IsVisible = true;
                UISkillOutputDetails.Text = $"Found {m_skeletalDetectorBinding.Bodies.Count} bodies (took {m_skeletalDetectionRunTime} ms)";
            });
        }
  • There is also a BodyRenderer.cs class used to draw the skeletons on top of the CameraPreview Image control. It draws lines in an empty canvas.

You can download the sample code from here https://github.com/elbruno/Blog/tree/master/20190501%20VisionSkills%20Skeleton%20Sample

Greetings @ Burlington

El Bruno

References

#Windows10 – Windows Vision Skills (Preview), an amazing set of AI APIs to run in the edge!

Hi!

Today’s announcement is a big one if you are interested in moving AI capabilities to the edge. The Windows team made public the preview of the Windows Vision Skills framework:

Windows Vision Skills framework is meant to standardize the way AI and CV is put to use within a WinRT application running on the edge. It aims to abstract away the complexity of AI and CV techniques by simply defining the concept of skills which are modular pieces of code that process input and produce output. The implementation that contains the complex details is encapsulated by an extensible WinRT API that inherits the base class present in this namespace, which leverages built-in Windows primitives which in-turn eases interop with built-in acceleration frameworks or external 3rd party ones.

The official blog post explains the basic features of the framework and describes a set of scenarios like Object Detector, Skeletal Detector, and Emotion Recognizer.

There are UWP apps in the samples repo, and it only took 1 minute to set everything up and get the app up and running. In the following image, it smoothly detects a person and a chair.

The next image is the sample for the Skeletal Detector (as an old Kinect dev, this really makes me happy!)

This is a big announcement, because all of these APIs are native, and that means we can easily use them in …

Greetings @ Toronto

El Bruno

References



#AI – My posts on CustomVision.ai, running on ONNX, Docker, on PC, RaspberryPi, MacOS and more !

Hi !

After the events in Canada and the USA, and several posts, I think it’s time to make a recap of the posts I’ve written about CustomVision.ai: how I created a custom object recognition project; how I later used it as a web HTTP endpoint from a console application; how I used it in Windows 10 with ONNX and Windows ML; and finally how I ran the object recognition project inside a Docker container on PC, Mac, and Raspberry Pi.

  1. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  2. Object recognition with Custom Vision and ONNX in Windows applications using WinML
  3. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, drawing frames
  4. Object recognition with Custom Vision and ONNX in Windows applications using Windows ML, calculate FPS
  5. Can’t install Docker on Windows 10 Home, need Pro or Enterprise
  6. Running a Custom Vision project in a local Docker Container
  7. Analyzing images in a Console App using a Custom Vision project in a Docker Container
  8. Analyzing images using PostMan from a Custom Vision project hosted in a Docker Container
  9. Building the CustomVision.ai project in Docker in a RaspberryPi
  10. Container dies immediately upon successful start in a RaspberryPi. Of course, it’s all about TensorFlow dependencies
  11. About ports, IPs and more to access a container hosted in a Raspberry Pi
  12. Average response times using a CustomVision.ai docker container in a RaspberryPi and a PC

Windows 10 and YOLOV2 for Object Detection Series

Greetings @ Burlington

El Bruno


#CustomVision – Analyzing images in a Console App using a #CustomVision project in a #Docker Container

Hi !

This is a special post. It’s the 1st one I’ve written completely on my MacBook, so I’m sure that Javier (@jsuarezruiz), Yeray (@JosueYeray), Braulio (@braulio_sl), Luis, Sara, Roberto, and other Mac users will be proud of me 😀

So, I built and ran my Custom Vision Marvel project in Docker for Mac. A smooth build, and also a fast one!

docker build -t elbruno/cvmarvel:3.0 .

01 docker build on mac

Then get the image ID and run the image

03 docker list images and run image
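A sketch of those two commands (the port mapping here is an assumption based on the curl call that follows; the port the container exposes may differ):

```shell
# List local images to get the image id / tag
docker images

# Run the container, publishing the service on port 8080 of the host
docker run -p 8080:80 elbruno/cvmarvel:3.0
```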

The final step is to play around with curl in bash to POST an image (the file name with the @ prefix took me some Bing searches). Iron Fist detected!

curl -X POST http://127.0.0.1:8080/image -F imageData=@img1.jpg 

05 docker bash ls image analyzed and source image.png

Ok, the environment is working, so it’s time to create a .NET Core Console App to test this using some amazing C# code. I have all my code in Azure DevOps, so I synced my repo and added a new project to my current solution.

06 new netcore project in visual studio for mac

A few C# lines in my console app and I was able to analyze a local picture using the Custom Vision model in a container

07 console app in vs for mac detected image

The source code is very simple

Happy coding!

Greetings @ Toronto

El Bruno

References

My Posts

Windows 10 and YOLOV2 for Object Detection Series